I'd love to hear about what interesting problems (technically or otherwise) you're working on -- and if you're willing to share more, I'm curious how you ended up working on them.
Thank you :)
Specifically, I spend a lot of time thinking about and writing embedded software. The aircraft is fully autonomous and needs to be able to fly home safely even after suffering major component failures. I split my time between improving core architectural components, implementing new features, designing abstractions, and adding testing and tooling to make our ever-growing team work more efficiently.
I did FIRST robotics in high school, where I mainly focused on controller firmware. I studied computer science in college while building the embedded electronics for solar-powered race cars, and also worked part-time on research projects at Toyota. After graduating with a Master's degree, I stumbled into a job at SpaceX, where I worked a lot on the software for cargo Dragon and then built out a platform for microcontroller firmware development. I decided to leave SpaceX while happy, and spent a couple of years working on the self-driving car prototype (the one that looked like a koala) at Google. Coming up on my third year, I was itching for something less comfortable and decided to join a scrappy small startup with a heart of gold. Now it's in the hundreds of employees and getting crazier and crazier.
I wouldn't say I'm qualified for a job in embedded programming (even though I know how to code and it's my job), but even if I were, there wouldn't be any opportunity for me to bounce between companies like those in a million years.
PS: Sorry for the spelling, not native.
The answer to your question is: Yes.
* higher purpose project saving lives and helping people: check
* well-engineered, reliable, smart solutions quadrupling efficiency: check
* planes, Africa, autonomous flight, landing and take off hacks, app based pre-flight checks, oh my!
* all for peaceful, life-supporting, humane purposes!!!
Your path to where you are now is crazy...sounds like you've had an exciting career!
Where did you get your masters? I have an EE & CS B.S. from RPI plus 3 years of application development experience in the Fin Tech industry. I am strongly considering swapping industries to Embedded Control--that is what I enjoyed most in college--but I am unsure how to break into the industry. Do you recommend a masters or just sending some apps out? I have a good deal of C++ and micro-controller experience, but none commercial.
There are tons of embedded software projects that lack software engineering rigor. If you're good at unit testing and mocking, for example, there's no reason why you can't unit test embedded code. Applying general software engineering practices to embedded code (effectively) is a good way to differentiate yourself.
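As a minimal sketch of that idea, here's how hiding hardware access behind an interface makes embedded-style logic unit-testable with a mock. The flight-controller scenario, function names, and threshold are all hypothetical; in a real C codebase you'd reach for a framework like CppUTest or CMock, but the pattern is the same.

```python
from unittest.mock import Mock

# Hypothetical flight-controller rule: return home when the battery sags.
# All hardware access goes through a hardware abstraction layer (HAL),
# so the decision logic itself never touches real hardware.
LOW_VOLTAGE_THRESHOLD = 3.3  # volts per cell, illustrative value

def should_return_home(hal):
    """Pure decision logic; `hal` is the only hardware dependency."""
    return hal.read_cell_voltage() < LOW_VOLTAGE_THRESHOLD

# Unit test: mock the HAL so the test runs anywhere, no target board needed.
hal = Mock()
hal.read_cell_voltage.return_value = 3.1  # simulate a sagging cell
assert should_return_home(hal) is True

hal.read_cell_voltage.return_value = 3.9  # healthy cell
assert should_return_home(hal) is False
```

The design choice doing the work here is the seam between logic and hardware: once register reads live behind an interface, the logic is just ordinary testable code.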
About a year ago, I started spending more time researching climate change. I learned how important energy storage will be in enabling renewable energy to displace fossil fuels. The more I read, the more fascinated I became with the idea of building underground pumped hydro energy storage. I found a research paper from the U.S. DOE, written in 1984, showing that the idea was perfectly feasible and affordable, but it seems that nearly everyone has forgotten about it since. (They didn't build it at the time because the demand wasn't there yet; now energy storage demand is growing exponentially.)
A year later, I'm applying for grant funding to get it built. I know that nearly everyone will tell me I can't do it for this or that reason, because people don't like change and are scared of big things even when the research shows they make perfect sense. But I'm doing it anyway, because no one else is getting it done. The idea is too compelling and too important to ignore. So here goes nothing!
Here are two recent startups in the field with multi-million-dollar funding. They were serious approaches, with many people involved, good planning, etc. They still fizzled out when it came to installing their first capacity.
I believe a reason for the funding problems is the high uncertainty for the economics of storage. Electrical energy is traded in a market. And your trading strategy in the market has a big impact on whether you earn money. Without solid numbers for energy storage and the expected trading outcome, investors will have a hard time.
Rather than a way of making money, I like to think about flexibility and fast dispatching as an enabler for way more renewables to come online, and that is crucial for the human race right now.
I think there's a big gap for both types of storage - fast dispatching for intraday demand variations, replacing gas peakers, and more static storage as in the OP for multi-day gaps in renewable production such as periods of high pressure during winter when wind speeds and irradiance are both very low.
Can't wait to see how this market develops.
I wish you luck with your endeavor. I think you are correct that the problem of energy storage is the most important one to solve to allow renewables to really take off for general power supply.
I also believe that the efficiency of storage is secondary to a degree, since renewables can supply an enormous amount of energy, so losses from pumps or energy conversions are nearly insignificant. At least right now, while energy storage is in such bad shape. Certainly a lot more promising than having batteries everywhere.
I don't have his spreadsheet anymore, but it seemed really solid.
How much water do you have to move to power a house, anyway? It must be a lot: a truck pulling a tank of water goes up a hill as part of a journey without worrying about running out of gas, and it could be carrying 40,000 litres, or 40 tons, of water. Presumably there is no way you could move enough water at home to make a hydro plant: pump it up at night with cheap electricity and run it down at peak time to save money?
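For a rough sense of scale, here's the back-of-the-envelope arithmetic using E = m·g·h. The household consumption, head height, and efficiency are all assumed round numbers:

```python
# Back-of-the-envelope: water needed to supply one house for a day
# from pumped hydro. All inputs are illustrative assumptions.
g = 9.81             # m/s^2
head = 100.0         # metres of height difference (assumed)
efficiency = 0.8     # round-trip efficiency (assumed)
daily_use_kwh = 10   # typical household daily consumption (assumed)

energy_joules = daily_use_kwh * 3.6e6             # 1 kWh = 3.6 MJ
mass_kg = energy_joules / (g * head * efficiency)  # m = E / (g * h * eta)
litres = mass_kg                                   # 1 kg of water ~ 1 litre

print(f"{litres / 1000:.0f} cubic metres of water")  # roughly 46 m^3
```

So under these assumptions, a single home needs roughly a tanker truck's worth of water lifted 100 m every day, which is a big part of why pumped hydro only really pays off at grid scale.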
It's really cool to see a feasible alternative to batteries. I think climate change is the single most important problem anyone can be working on right now -- amazing that you've found such a massive lever to pull on this issue.
It's a fair point, but I think you oversimplify the pumped hydro case. Pumped hydro also has quite a few electromechanical components (turbines/motors), power electronics, and other moving parts (valves, overflows). I can imagine some of these components require semi-regular maintenance (hence the maintenance shaft in the diagram on OP's website).
You'd really have to run the numbers to see which costs more to maintain in the long term.
When I say "per unit," I just mean that a huge battery is made up of many cells that will need to be replaced individually, whereas large pumped hydro facilities are still only a small number of total reservoirs.
The pump/turbine technology we use is the same that's been used for a hundred-plus years in traditional pumped hydro dams, and the maintenance cost is very low. The life of a project is 40+ years, and in reality it can be 100 years with a relatively low amount of maintenance. The San Diego County research posted on our website has good figures on this. Thanks!
In reality, when Li-ion batteries are installed in a large system, I believe the round-trip efficiency is quoted much closer to PHS. Sorry, I can't find the best research to cite right now, but here are a couple of sources I found with a quick search.
"lithium-based ESS rated for two hours at rated power will have an AC round-trip efficiency of 75 to 85%."
"Conversion round-trip efficiency is in the range of 70–80%"
This one says 90-95%
I've heard that Li-ion installations can get up to 90-95% round trip, which is fantastic, and better than PHS for sure. But it's not the most important detail in the equation. Here's why:
One thing to remember is that power is lost all over the system in conversion and transmission. So raw efficiency can be less important than getting the right capacity to the right place on the grid. And that brings us to cost.
Even though PHS is a little less efficient than Li-ion, 85% for PHS is still really good. (See my other comment below about 70% vs. 85%.) And the math shows that investing in PHS is simply cheaper, even after assuming that Li-ion will drop in price by 3x in the coming decades. This is partly because Li-ion has a much shorter lifespan and needs to be replaced about every decade.
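To make the lifetime-cost argument concrete, here is an illustrative calculation. Every number below is a made-up placeholder, not a figure from this thread or the project; the point is only the structure of the comparison (one long-lived asset vs. periodic replacements):

```python
# Illustrative per-kWh lifetime cost over a 40-year horizon.
# All prices are hypothetical placeholders, not real project figures.
horizon_years = 40

phs_cost_per_kwh = 200    # assumed $/kWh of capacity, built once
liion_cost_per_kwh = 300  # assumed $/kWh of capacity today
liion_life_years = 10     # packs replaced roughly every decade
future_price_drop = 3     # assume replacements get 3x cheaper

# PHS: one build lasts the whole horizon.
phs_total = phs_cost_per_kwh

# Li-ion: initial install plus periodic replacements at reduced prices.
replacements = horizon_years // liion_life_years - 1
liion_total = liion_cost_per_kwh + replacements * (liion_cost_per_kwh / future_price_drop)

print(phs_total, liion_total)  # 200 vs 600.0 under these assumptions
```

Even granting a 3x future price drop, the replacement cycles dominate here; a real analysis would also discount future spending and include O&M costs on both sides.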
Li-ion is still great and super important! But it's not looking like the best contender right now for grid-scale storage.
> I learned how important energy storage will be to enable renewable energy to displace fossil fuels.
The above is a reasonable statement, however, your website says the following:
> We can’t quit carbon without energy storage. To stop climate change, renewables must replace fossil fuels.
> Without energy storage, renewables will fail to reach even 25% of the energy market by 2040. This will cause global temperatures to rise over 3°C, a level which will cause catastrophic climate damage.
Those are not just misleading but outright lies. Now, I won't hide my bias here: I work on nuclear fission. But here's the reality: there are many possible pathways to net-zero carbon and to limiting global temperature rise to well below 3°C (below 1.5°C, in fact).
To just list a few:
* Massive adoption of nuclear fission alone
* Development & massive adoption of nuclear fusion alone
* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration
* Shift from fossil fuels to renewables + storage (probably not alone)
Or any combination of those, in addition to a number of alternative approaches.
Edit: Also, it should be noted that the energy sector alone represents only about 1/5 of the emissions problem. In order to get to net-zero GHG and stop anthropogenic climate change, the clean energy sector needs to expand well past the current global TPES and supply net-zero electricity that allows for the decarbonization of the other main contributors:
Agriculture, steel+cement+plastic, transportation, buildings&appliances, and flora loss leading to lost carbon stores (deforestation etc)
Even if renewables and storage could supply 100% of our electricity or even total power supply, you would still only be 1/5 done solving climate change. There is no unitary solution.
Acting as though renewables are necessary, instead of one option among several, is either denial or malice. In reality, renewable energy is nowhere near capable of reliably and safely taking on a large portion of our energy supply globally. It is expensive (you can make claims about unit cost, but what really matters is the country scale; look at German electricity prices vs. just about everywhere else), it is dangerous, it takes a lot of land area, and it is the least reliable.
I don't want to spend a lot of time here stomping on renewables, but there is plenty of reason to, and my main point is that I feel it is unjust and immoral for you to claim that renewables "must replace fossil fuels" if we are to stop climate change. It's just not true, and you need to admit that.
The energy industry is arguably one of if not the most important backbone of our modern society, and responsible for the safety and health of billions of people. Whether you're working on the generation or storage side, it is all our responsibility to be honest and make true claims - not to spread biased misinformation when it benefits your particular solution.
I'd like to finish by making it clear that I'm very happy you're working on your tech, and I hope you succeed in making it the best it can be. Renewables are certainly trending toward higher adoption, and we need reliable, efficient, scalable storage solutions in order to avoid dangerous outages and other grid issues.
You bring up valid criticisms of existing solutions, although I do think you should also be fair to those. Most things in life are a trade-off: maybe pumped hydro is a better majority solution for the grid, but lithium ion is an incredibly important, successful and expanding technology that needs to be given credit for its wide range of great applications.
I hope this response has not been inflammatory: I just very much care about maintaining a truthful public discussion around energy. I wish you the best of luck, and I hope you can take something useful from this.
If you want to send me research supporting some of your thoughts here then I'd love to see it. I do know for example that it's a very valid debate whether or not nuclear has a place in our climate fight.
I'll try to re-work the language in my materials to make sure I'm not excluding other valid viewpoints. Thanks!
but none of the below are renewables
* Massive adoption of nuclear fission alone
* Development & massive adoption of nuclear fusion alone
* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration
Parent is primarily disputing: "To stop climate change, renewables must replace fossil fuels." and if renewables fail, "this will cause global temperatures to rise over 3°C"
I wonder what the percentage would be like if the energy sector needs to provide enough energy to replace all fossil fuels. It's certainly much higher than 20%.
I'm hoping fission can scale to about 2 EWh annually in the next several decades; it should be noted this is quite aggressive scaling. 500 PWh is more than enough to reach net-zero emissions.
> look at German electricity prices
True, pretty expensive. But those prices also include capital for investment in energy infrastructure, such as building lines to get power from the north (high production) to the south (high consumption). The implementation tends to be slow, but there are other reasons for that.
Another example is Norway, which uses 98% hydro power. Sure, they have topographical advantages not available everywhere, but technologies like this could open up more possibilities.
So fission can be utilized, but I doubt that Germany's closing of its plants was a terrible decision.
There are 3 fission fuels occurring in nature: Th232, U235, U238
Actually, our reserves of Uranium are greater (by energy available to generate) than all of our Coal, Oil and Natural Gas reserves combined.
Our Thorium reserves are even greater than those.
In fact, Thorium is extracted as a byproduct of rare-earth-metal extraction, so we currently mine enough Th232 per year to replace the entire global energy and fuel industry, even though there is no demand for Th232 extraction. Kind of mind-blowing.
> [fission] it is expensive in deployment
I don't see where this idea comes from. In real life, regions powered by more fission have significantly cheaper electricity than those powered by less.
> the problem with nuclear waste
I genuinely don't think there is a problem with nuclear waste; this concern is a myth/misunderstanding based on a mix of fear-mongering via conflation with nuclear weapons and a lack of comparison.
Consider the following: all energy sources have waste products - nothing is 100% efficient.
Fossil fuels pump literally billions of tonnes of toxic gas into the air as their waste product. It moves around, we can't store it, and it is responsible for the deaths of millions of people each year through air pollution.
Renewables production has the same issue (although different gases), and also tends to pollute the water and local environment with other toxic chemicals and metals.
Nuclear fission produces the densest and smallest amount of waste of any source, and that waste is solid and easy to manage. We know where quite literally all of it is, and it doesn't hurt anybody or negatively affect the environment in any way as long as you keep it stored somewhere.
As far as I'm concerned, nuclear energy does not have a waste problem, it has a waste solution. Global warming is the problem with energy waste, more specifically it is the problem with hydrocarbon waste.
> Another example is Norway, which uses 98% hydro power. Sure, they have topographical advantages not available everywhere, but technologies like this could open up more possibilities.
Agree with you. Renewables tend to vary in effectiveness based on location - in those locations which are well-suited for them, I think they should be used! Though I'm not sure what you mean by "could open up more possibilities" - we've had hydro power for thousands of years.
> I doubt that Germany closing plants was a terrible decision.
Note the following excerpt from Mike Shellenberger on Twitter:
> Germany’s renewables experiment is over. By 2025 it will have spent $580B to make electricity nearly 2x more expensive & 10x more carbon-intensive than France’s. The reason renewables can’t power modern civilization is because they were never [...]
> A major new study of Germany’s nuclear [...]
> - it imposes "an annual cost of [...]"
> - "over 70% of the cost is from the 1,100 excess deaths/year from air pollution from coal plants operating in place of the shutdown nuclear plants"
And Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.
Data is for overall efficiency, not power production.
And Shellenberger is a nuclear lobbyist, for that matter, and his statements should be scrutinized. I am not fully content with the decision to cut fission power generation so sharply, but all these numbers are conjecture.
I think it is extremely foolish to make caricatures of people. Twenty years ago, Elon Musk was a software startup guy who had no idea about anything hardware - but that's only because nobody bothered to consider the full human behind the caricature.
Mike Shellenberger was an anti-nuclear activist for much of his early life and has always been (and is still) an environmentalist. Furthermore, he may be a lobbyist now (I'm not sure if you are right or wrong), but he ran for governor of California a few years ago. He has been very explicit in explaining his reasoning for shifting from anti-nuclear to pro-nuclear in multiple talks and articles.
Take a look at the full human, and your justification for scrutiny fades away. Everybody should be scrutinized to an extent, but he is not fundamentally a biased lobbyist with financial incentives.
> Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.
This is the problem, man. Germany has spent hundreds of billions of dollars on renewables and they still have high GHG emissions - all they have to show for their massive spending is a couple thousand extra deaths per year and higher electricity prices.
If you gave my company the same amount of money, we'd have the entire world to net zero emissions within two decades.
Goes to show the inefficiency of government funded programs, and the awful incompatibility of renewable energy with a reliable, affordable consumer electricity market.
> I like to use current numbers, because extrapolating development is often pretty close to lying.
We can use current and past numbers: for its entire existence, nuclear fission has been the (a) safest, (b) highest fuel density, (c) least waste-producing, (d) lowest emissions, (e) most reliable mass energy source humanity has ever had.
The new generation of reactors will only improve this divide between fission and everything else. If you are against extrapolating development and want to rely on established numbers, you must conclude [fission > renewables]
I know I'm biased, but I'm also right about all those superlatives.
Again, the most promising option would be to simply attach our installation to an existing reservoir. We don't use any additional water; we just borrow it. For an amply sized reservoir, each cycle would raise and lower the water level by just an inch or so. Another promising option is to use the ocean as an upper reservoir. Salt water can be accommodated; see our notes about the Okinawa Yanbaru Station.
There are more details in our white paper posted on the website.
But it's certainly not out of the question to go deeper instead.
Then the total energy capacity is E = V * rho * g * h, so the stored energy is proportional to height, while the tunnel-boring price is roughly constant as long as the bored volume of the reservoirs is much larger than the volume of the vertical shafts.
I realize it's a bit oversimplified, but if we consider two prices, p1 per volume for boring horizontally (for the reservoirs) and p2 per volume for boring vertically, then increasing the reservoir size by a volume delta V requires boring 2 * delta V (upper and lower reservoir), while for boring vertically the difference in height depends on the diameter...
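The E = V * rho * g * h scaling can be sanity-checked numerically (the reservoir volume here is an arbitrary assumption):

```python
# Sketch of the scaling argument: stored energy E = rho * g * h * V
# grows linearly with head height h for a fixed reservoir volume V,
# while the reservoir boring cost depends mainly on V.
rho, g = 1000.0, 9.81  # water density (kg/m^3), gravity (m/s^2)

def stored_energy_mwh(volume_m3, head_m):
    return rho * g * head_m * volume_m3 / 3.6e9  # joules -> MWh

v = 1e6  # one million cubic metres of reservoir (assumed)
print(stored_energy_mwh(v, 500))   # 1362.5 MWh
print(stored_energy_mwh(v, 1000))  # doubling the head doubles the energy
```

This is why going deeper is attractive: the extra vertical shaft is a small fraction of the total bored volume, but it scales up the energy of every cubic metre of reservoir.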
If you can remove the mountain, you could scale this out to everyone in the world and single-handedly solve this problem.
There will be other problems to overcome, but someone will figure it out, so why not you? I wish you all the best in this very important effort.
Affordable, but efficiency is so-so: 70-80% according to Wikipedia.
And actually, we think that 80-85% round trip is more accurate for our projects, because we'll use the latest and greatest tech (variable-speed reversible Francis-style pump/generator turbines). I think the 70% in those figures comes from older projects with pump/turbines that were not quite as efficient.
Say what? So-so? 70-80% efficiency sounds pretty damn amazing!
How can I help?
The challenge is obviously scaling, since every municipality is different. For now it's going to cover my region and we'll see from there.
I tried something similar, but mostly to figure out where land is being purchased recently in a region. But the land/parcel/address system is all over the place, and even that info is not consistent across cities.
have you looked at data providers who may have this data?
Both sound like interesting problems, and it would be awesome if municipal-level land data was available at scale.
"Forests within 100 miles"
"High rises within 10 miles"
"Anything within 0.5 miles"
In my one county alone there are 90+ municipalities, each with its own Planning and Zoning Commission, and most with their own (varying levels of) websites. I'd say 5-10% don't have a website at all.
In your situation, how are you getting data for when land is up for sale/zoning etc.?
DNS is currently centralized, controlled by a few organizations at the top of the hierarchy (namely ICANN), and easily censored by governments. Trust in HTTPS is delegated to CAs, but security follows a one-of-many model, where only one CA out of thousands needs to be compromised for your traffic to be compromised.
We're building on top of a new protocol (https://handshake.org, launching in 7 days!!) to create an alternate root zone that's distributed. Developers can register their own TLDs and truly own them by controlling their private keys. In addition, they can pin TLSA certs to their TLD so that CAs aren't needed anymore.
I wrote a more in-depth blog post here: https://www.namebase.io/blog/meet-handshake-decentralizing-d...
There is an issue, though: the auction system gives early participants an advantage in buying names cheap. If only 100 people are buying names on day 1, they'll be able to buy a lot of the names without competition. Handshake has a mechanism to prevent this: names are released for bidding over the first year, so that people who learn about it six months late can still register good names. The release schedule is basically hash(name) % 52 to determine the week in which you can start registering any given name.
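A toy sketch of that staged-release scheme. The actual Handshake hash function and rollout details may differ; SHA-256 here is just for illustration:

```python
import hashlib

# Staged release: a deterministic hash of the name picks which of the
# 52 weeks bidding can begin, so names roll out over the first year.
# (Illustrative only; Handshake's real scheme may differ in details.)
def release_week(name: str) -> int:
    digest = hashlib.sha256(name.encode()).digest()
    return int.from_bytes(digest, "big") % 52

print(release_week("example"))  # a week in 0..51, stable across runs
```

Because the mapping is deterministic, anyone can compute in advance when a given name opens for bidding, but nobody can grab every good name in week one.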
While in college (CS & Math), I got heavily interested in growing food in the most efficient and healthiest way possible. I was a dreamer when I started so I thought more of how to grow 'earthly' produce on Mars, but then I realized that my own planet Earth is so massively underserved.
It's basically like this: I mastered growing leafy greens in a closed indoor environment, then I tried to cover all the major physical and biological markers, and now I'm trying to find the optimal levels of the 5-6 variables (currently) that I can fully control and that may produce the best phenotype: CO2, O2, light, nitrate, P, K. These parameters have their own sub-definitions.
So far I have had great results. I am trying to raise investment so I can finally make it a reality. Check the numbers here: hexafarms.com (no fluff)
How's the taste?
Not denying it's possible to grow food very efficiently indoors but a vastly oversimplified opinion is that plants need sunlight to be tasty. Is this wrong?
Yes, you don't really need sunlight whatsoever. I was shocked myself, until I recalled the high-school biology concepts of genotype and phenotype, i.e. the genetic structure that manifests itself given the right physical conditions (at least for plants). As for the plants' nutrients, here's a classic: Teaming with Nutrients: The Organic Gardener's Guide to Optimizing Plant Nutrition, by Lowenfels. I was amazed to find how complex, yet simple, plants are.
and Beanstalk (a YC company) https://www.beanstalk.farm/
Thanks for the references.
Hab 1 had aquaponics and fish; not sure what Hab 2 is going to look like, as they haven't shared much, but he's just started churning videos out again the past month or two.
It's a really neat project, I just hope he continues to show as much as he did with Hab 1 now that he's part of a startup.
Is it possible to set up a 'microfarm', similar to a window fridge appliance, in part of an apartment room?
I'm ok with some manual work every 2 days, such as filling in a water container.
Besides water and nutrients, how much electricity would this use to grow a generation of leafy greens, per kg of produce?
Thanks for working on this!
Are your farming systems fully automated? If so, has that been more of a software challenge, or more of a mechatronics challenge?
My vision is to have distributed farms (contrary to conventional wisdom, I have found that smaller indoor farms will be more profitable) every eight blocks or so.
Not really; it's quite manual (as of now). I have had to change countries almost three times since I started, so I'm focusing more on the data and algorithm-training part to figure out the right parameters (the farm is just a testbed). One example would be using a $5 camera to measure growth rather than buying a $100 3D whatnot camera.
I love this! Makes me happy to see someone's working on such an interesting problem that would benefit many.
For feedback, I believe using photographs of the leafy greens would be effective in communicating your vision.
I actually graduated from college this year, and for personal reasons I've had to change countries; now I'm in another Master's program... ready to drop out anytime. The whole project has gone dormant for months at a time! I'm mostly trying to leverage ML for optimizing things. I guess that's what modern farming is missing (not ML per se, but optimization).
I'm trying to raise some investment (or in the worst case bootstrap and risk everything in the next few months), then I will go crazy with the idea.
I'd love to hear how you'd use it.
> At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe.
Why not send each received email individually? If you aggregate them first, it becomes very difficult to reply to individual messages with standard email clients.
I use a similar but far less fancy approach with email filters: I have everything put into its own filtered folders then only check them on a schedule.
Your approach is good because the schedule is right there in the email address.
Food for thought.
I mean, use email@example.com,
and have an admin UI that lets you set "firstname.lastname@example.org" => "weekly".
Then you don't even need a website.
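One way an address-encoded schedule could look, sketched in Python. The `+tag` convention and the tag names are assumptions for illustration, not the format of any real product:

```python
# Derive the digest frequency from the alias itself, so the schedule
# lives in the address and no separate website/admin UI is needed.
# Hypothetical convention: local.part+weekly@example.org
def schedule_for(address: str) -> str:
    local = address.split("@")[0]
    if "+" in local:
        tag = local.rsplit("+", 1)[1]
        if tag in ("daily", "weekly", "monthly"):
            return tag
    return "immediate"  # no recognized tag: deliver right away

assert schedule_for("jane.doe+weekly@example.org") == "weekly"
assert schedule_for("jane.doe@example.org") == "immediate"
```

The `+tag` subaddressing trick is handy here because many mail servers already route `user+anything@` to `user@`, so the tag survives delivery and the digest job can parse it back out.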
Across all platforms (not just Reddit), people including myself like to save/bookmark interesting content in the hopes of getting some use out of it later. The problem arises when you start accumulating too much content and forget to ever check that stuff out.
I'm working on a solution to help resurface Redditors' saved items using personalized newsletters. I'm calling it Unearth, and users get to choose how frequently they want to receive their newsletter (daily, weekly, or monthly). The emails contain five of their saved comments or posts and link directly to Reddit, so that when viewing one, they can decide whether or not to unsave it.
Basic functionality is all there; it just needs some more styling, and the landing page could be spruced up.
Kinda different, kinda the same, but I'd love to use an app with much better search than the 'direct search' currently in most aggregator/note apps. If I searched 'quotes', it would rip out and return all the things in italics, in quotation marks, or that the algorithm deems to be quotes based on its scrape of the internet; kinda like Google, but 'personal search' based on my notes, articles, all my different emails (work, and my 37 different Gmail accounts), and websites I frequent (like Reddit, Hacker News comments, etc.). There was an HN article the other day that got me thinking about this problem, but I can't seem to find it. It approached the idea from a much deeper technical level, though, using Emacs and searching through code. If you could bring that into an easy-to-use, consumer-facing GUI, I think it'd have the potential to be pretty game-changing.
'Personalized Search, and we don't have to steal your data because you willingly give it to us' - Google
And that's a really interesting idea regarding search. I'd love to see the HN thread/article you mentioned to get a better understanding of the concept. Until now, Unearth's only focus has been on active content resurfacing, but I've seen many Redditors mention wanting to search their saves as well, so I think I'll look more into this.
Appreciate the ideas, keep them coming.
I would need to figure out how injection would work for saved comments, do you have any ideas? I'm definitely going to save this idea so thank you!
Feel free to try it out and let me know what you think or if you have any suggestions.
For privacy you needn't require the reddit ID of your users. Simply that they want to save something from reddit to their tryunearth.com account.
> Simply that they want to save something from reddit to their tryunearth.com account
When you say that, I envision the extension overriding or extending Reddit's save button functionality by making an API call to the unearth backend. Is that kinda what you had in mind?
I take a ton of notes on Notion, but I worry that I'll never see most of them again. Maybe part of the value is just in writing the notes in the first place...?
Kudos for solving out this problem for Reddit!
Why not call it Digg?
It's funny how we're all working from different definitions of the word "problem" - I'm certainly not changing the world with medical supplies for developing countries, renewable energy, payment systems and so on.
But it's something I'm really passionate about, and I'd be over the moon if I came anywhere close to the picture I have in my mind.
Back when I was studying German and Chinese, I would spend hours and hours on rote practice with little to show for it.
My brain almost felt like it was on autopilot - the eyes would read the words and the hands would write the sentences, but the neurons weren't really firing. It didn't feel like I was properly building the synaptic bridges necessary to actually use those words in conversation.
On the flip side, after just 20 minutes speaking with a tutor, my proficiency would improve by leaps and bounds. Being forced to map actual, real-world thoughts and concepts to the words and expressions I had learned - that's what made everything click. It felt like the difference between just reading a chapter in a maths textbook and actually doing the exercises.
So after keeping track of progress in NLP and speech recognition/synthesis in recent years, it seemed like a logical time to start. Progress is slow and incremental, but it is there.
2.) Porting my Python code for nonlinear gradient-driven optimization of parametric surfaces to C++. It includes a constraint (propagation) solver based on miniKanren, extended with interval arithmetic for continuous data (interval branch and contract). This piece is a pre-processor, narrowing the design space to only feasible hyper-boxes before feeding design parameter sets (points in design space) to the (real-valued) solver. It also does automatic differentiation with respect to control points (i.e. B-spline control points), so I can write an energy functional for a smooth surface, with Lagrange multipliers for constraints (B-spline properties), and then get the gradient and Hessian without extra programming. This makes for plug-and-play shape control. I am looking to extend this to subdivision surfaces and/or to work it towards mesh deformation with discrete differential geometry, so I've been experimenting with those things in separate mini-projects.
3.) Starting the Coursera discrete optimization course. This should help with, e.g. knapsack problems on Leetcode, some structural optimization things at work, and also it seems the job market for optimization is focused on discrete/integer/combinatorial stuff presently so this may help in ways I do not foresee.
4.) C++ expression template to CUDA for physical simulation: I am periodically whittling away at this.
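The interval branch-and-contract pre-processing in 2.) hinges on a contraction step: each constraint is used to shrink the interval box, and boxes that contract to empty are discarded as infeasible. A minimal hand-rolled sketch for a single sum constraint (the function name and setup are mine, not from the real solver):

```python
def contract_sum(x, y, c):
    """Narrow intervals x, y (given as (lo, hi) tuples) under the
    constraint x + y == c, a simple hull-consistency contraction."""
    xlo, xhi = x
    ylo, yhi = y
    # From x = c - y: x must lie in [c - yhi, c - ylo]
    xlo, xhi = max(xlo, c - yhi), min(xhi, c - ylo)
    # From y = c - x: y must lie in [c - xhi, c - xlo]
    ylo, yhi = max(ylo, c - xhi), min(yhi, c - xlo)
    if xlo > xhi or ylo > yhi:
        return None  # box is infeasible: prune it from the design space
    return (xlo, xhi), (ylo, yhi)
```

A branch-and-contract loop would alternate this narrowing with bisection of whatever interval remains widest, keeping only the boxes that survive.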
Build a functional to describe your ship problem, minimize it: if the solver is happy, you have a boat.... uh, or if you haven’t solved the entire problem, you have some geometry which can be stitched together with more optimization to make a boat.
More broadly, “why a boat?”
Answer: because boats have a lot of constraints, and a lot of shape (Gaussian curvature, non-rectangular topology, a need to be cheaply produced, etc. etc.)
So it’s a good problem to tax your generative design or design space search/optimization capability.
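As a toy version of that "write a functional, minimize it, get geometry" recipe (my own minimal stand-in, nothing like the real ship functional): take a fairness energy - the sum of squared second differences along a discrete curve - pin the endpoints, and run gradient descent. The minimizer here is the straight line between the endpoints.

```python
import numpy as np

def fair_curve(y0, y1, n, steps=20000, lr=0.02):
    """Minimize E = sum of squared second differences, endpoints fixed."""
    y = np.linspace(y0, y1, n)
    y[1:-1] += np.random.default_rng(0).normal(0.0, 1.0, n - 2)  # roughen the start
    for _ in range(steps):
        d2 = y[:-2] - 2 * y[1:-1] + y[2:]  # second differences
        grad = np.zeros(n)                 # gradient of E, assembled term by term
        grad[:-2] += 2 * d2
        grad[1:-1] += -4 * d2
        grad[2:] += 2 * d2
        grad[0] = grad[-1] = 0.0           # keep the endpoints pinned
        y -= lr * grad
    return y
```

Swap in a real bending energy, add inequality constraints, and the setup looks a lot more like hull fairing.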
I like other methods of getting local control, or finer shape control of surfaces. In my stuff I've used truncated hierarchical B-splines (THB-splines), which are great for adding detail, but useless for changing topology. People speak highly of (analysis suitable) t-splines but I say they are complicated and subdivision may be better overall now anyway.
Generally speaking, I think the whole industry will have to go to subdivision. (Among friends I'd say it may carry right down to poly meshes via differential geometry but those two representations might play well together given the right tools)
For everything you ever wanted to know about a B-spline, including a C++ library implementation from scratch, highly documented and explained:
1.) Piegl and Tiller "The NURBS Book"
This includes a tiny bit of shape control via optimization.
For an explanation of the basics of B-spline nonlinear optimization with Lagrange multipliers, focused on ships, there is a chapter here that takes you to the state of the art, circa 1995:
2.) Nowacki, et al., Computational Geometry for Ships
3.) Tony DeRose's book "Wavelets for Computer Graphics" actually has some good scripts getting at the basics of wavelet B-splines and some facets of hierarchical parametric geometry.
The above is a start at form parameter design for B-splines. This was okay 20 years ago. It's still important as a basis for understanding optimization of parameterized shape -- even subdivision surfaces have control points.
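For a taste of what's inside Piegl & Tiller: the basis functions at the heart of all of this come from the Cox-de Boor recursion. A direct, readable-over-fast transcription (their library code is far more careful about efficiency and edge cases):

```python
def bspline_basis(i, p, u, knots):
    """Value of the i-th degree-p B-spline basis function at parameter u,
    via the Cox-de Boor recursion (half-open interval convention)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:        # skip 0/0 terms at repeated knots
        left = ((u - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, u, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, u, knots))
    return left + right
```

With the clamped knot vector [0,0,0,1,1,1] and degree 2 this reduces to the Bernstein basis, and the functions sum to one at any interior parameter.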
Generally B-splines were found not to be flexible enough for representing local details efficiently. Further, the optimization techniques still require a lot of manual setup to get things right...
The next steps are still in development:
-subdivision surfaces are a way forward for shape representation. Generally they were more problematic for computing engineering quantities of interest, especially and precisely where they "go beyond" the B-spline to allow surfaces of greater flexibility -- that is where the analysis suitability breaks down to some extent. Again, this has been patched up in the last couple of decades but still change is slow to come to the engineering industry.
I think it's well worthwhile to look at geometric optimization in computer graphics as well. See the Caltech Multi-Res group, Keenan Crane at CMU (Geometry Collective), and tons of SIGGRAPH papers where discrete differential geometry has been leveraged to do neat things with shape. (E.g. curvature flow: https://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFl...
I think there is newer work building off this and adding more complicated constraints, but I can't remember offhand. As is, they have some already!)
Back to the point: you wanted optimization readings. Well, it's mostly in the literature, and the literature is mostly kind of vague when it comes to parametric optimization of B-splines. Though the high points are mentioned, the detail is often hardly better than what you find in Nowacki, 1995. To this end, I have some really specific entry-level PDFs that might help, and the first part of my stuff is written up in this paper: https://www.sciencedirect.com/science/article/abs/pii/S01678...
This deals mostly with curves, but has a direct extension to surfaces. Automatic differentiation really helps here! (I never published the bit on the extension to do surfaces directly (with all their attendant properties as potential constraints), as my professor said direct surface optimization was too expensive. Looking at the discrete differential papers as of late, I tend to disagree.)
Most of the research is being done out at the Colorado School of Mines by Paul Constantine. The basic idea is that you reduce your parameter space to the eigenvectors of the sensitivity matrix with the largest eigenvalues. Some of the work I have seen in constitutive modeling (and UQ) has effectively reduced parameter spaces of a couple hundred DOF to about 5-6.
Seems like that would be the (or a function of the) thing we are after in sensitivity analysis.
On the other hand, it appears that I may be able to get away with some naive assumption about this quantity, compute eigenvectors and find the active subspace... and then vary the mode in these directions.
Is this for local or global optimization?
Part of my stuff was about finding a way to guarantee that a particular set of inputs results in a feasible design. (Edit: maybe active subspace could replace this... or exclude poor regions faster)
The other part (the gradient driven part) solves curves and surfaces for shape which conforms to constraints. We really need the fine details to match as the constraints are often of equality type.
From there, it seems this active subspace method could really help in searching the design space. (From what I read, this is the purpose) A more efficient method of response surface design. My stuff is agnostic about this.
Then again, surely it could be of used in more efficiently optimizing constrained curves and surfaces... I will keep thinking but it seems a secondary use at best, or would you agree?
Active subspace comes from the uncertainty quantification community. If you assume all your parameters are Gaussian, then the sensitivity matrix is directly correlated to the probability density functions. I find it easier to think in terms of the sensitivity matrix, but it's useful to realize that the sensitivity matrix approximates (complex) probability distributions.
My thought was that if you were optimizing over a huge parameter space theta = [theta_1, ..., theta_m], then you could reduce it by only looking at theta_reduced = [theta_i | d loss/d theta_i > threshold], or you could look at active subspaces and change variables to xi = [xi_1, ..., xi_m], where xi_i = SUM_j a_ij theta_j.
The xi_i could be given by the largest eigenvectors of the sensitivity matrix S_ij = d^2 loss / (d theta_i d theta_j).
Wouldn't it be nice if hacker news supported latex.
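In Constantine's formulation the matrix in question is the average outer product of gradients, C = E[grad f grad f^T], estimated by sampling; the active subspace is spanned by its top eigenvectors. A numpy sketch on a toy loss of my own that depends almost entirely on one direction of a 5-D parameter space:

```python
import numpy as np

# Toy loss f(theta) = sin(w . theta): all variation lies along w
rng = np.random.default_rng(0)
w = np.array([3.0, 0.1, 0.0, 0.05, 0.0])

def grad_f(theta):
    return np.cos(w @ theta) * w

# C = E[grad f grad f^T], estimated by Monte Carlo over parameter samples
samples = rng.normal(size=(500, 5))
C = np.mean([np.outer(g, g) for g in (grad_f(t) for t in samples)], axis=0)

# eigh returns eigenvalues in ascending order, so the last eigenvector
# spans the (here one-dimensional) active subspace
eigvals, eigvecs = np.linalg.eigh(C)
active_dir = eigvecs[:, -1]
```

For this loss, C is rank one and the dominant eigenvector recovers w up to sign, so a 5-D search collapses to a 1-D one.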
I haven't done any work here, but I suspect I will be doing some of this towards the end of summer.
Yeah, Colorado School of Mines! Small world, I am in the metro area. I've actually talked with a physics prof from there about helping with a project.
This would be pretty hefty (Expensive) for a mesh. I’ve used it successfully for splines where a smaller set of control points controls a surface. Mesh direct sounds expensive to me. I assume you looked at constrained mesh smoothers? (E.g. old stuff like transfinite interpolation, Laplacian smoothing, etc?). Maybe newer stuff in discrete differential geometry can extend some of those old capabilities? What is the state of the art? I have a general impression the field “went another way” but not sure what that way is.
As for the auto diff, I’ve also got a version that does reverse mode via expression trees, but the fwd mode has been fast enough so far and is very simple. Nice thing here is that overloading can be used to construct the expression tree.
Of course if you do only gradient optimization you may not need the hessian. It’s there for Newton’s method.
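The dual-number flavor of forward-mode AD via operator overloading can be sketched in a few lines (a minimal toy, not the expression-tree version described above):

```python
# Each value carries its derivative along; overloaded operators apply
# the sum and product rules automatically.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot  # seed the input with derivative 1
```

E.g. d/dx (x*x*x + 2*x) at x = 3 comes out to 29 with no symbolic work. Reverse mode pays off once the number of inputs (control points) dwarfs the number of outputs, which is why the expression-tree version exists.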
I use FEMAP at my day job and have found Laplacian smoothing and FEMAP's other built-in tools wanting.
I am currently thinking that my goal is to try and use reinforcement learning to build high quality meshes. In order to do that you need a loss function and if you are building a loss function you might as well wrap an optimizer around it.
FEMAP seems a hot topic these days. Some folks at my work are building an interface to it for meshing purposes.
Simply for the experience. C++ is more in demand right now, as far as I can tell, sorry to say.
We've solved scaling and reliability (we handle 20 billion API requests a month), and we're now focusing almost all our efforts on our data quality, and new data products (like VPN detection).
We're bootstrapped, profitable, and we've got some big customers (like Apple and T-Mobile), and despite being around for almost 7 years we've still barely scratched the surface on the opportunity ahead of us.
If you think you could help we're hiring - shoot me a mail - email@example.com
- We're super developer friendly - you don't even need an access token to make up to 1,000 requests per day. We have a clean / simple JSON response, and official SDKs for most popular languages
- We have a quick, reliable API. We obsess over latency and availability, and handle over 20 billion API requests a month. (here's a technical overview of how we reduced rDNS lookups by 50x: https://blog.ipinfo.io/reducing-ipinfo-io-api-latency-50x-by...)
- We obsess over data quality. We have a data team that's constantly striving to make our data and accuracy even better than it already is.
- We're innovating. We've launched and are working on exciting new data sets and products in the IP and domain data space (VPN detection, the host.io domain API, and more).
- We care about our customers. We have people working on customer support and customer success. If you run into an issue or need help, we'll be there to answer your questions.
On the side, I'm an advisor to an impact investment foundation that is expanding their operations to East Africa. They're setting up an investment fund and accelerator programs to help companies tackle development challenges.
I'm also involved in a startup that is working to develop a new fintech app to create more data and access to credit for small-scale businesses in East Africa. It's a basic PWA app, not released yet, which has some real potential of scaling up and addressing some pretty substantial development challenges. (If anyone is really good with writing a bare-bones PWA based on Gatsby optimised for speed and low-bandwidth environments, please give me a shout).
I've had a weird career. Started out as a programmer in the late 90's, did my own startup in the mid 00's which was a half-baked success, moved to Africa for a few years and worked for the UN, moved back home and had kids, moved back to Africa and worked as a diplomat covering lots of conflicts in the Great Lakes region, moved back home again, worked for the impact foundation for a year and then rejoined diplomacy to do cyber work.
I didn't know any such norms existed. What are some of the existing agreements, and if you can talk about it, what are some of the new ones you're trying to push forward?
Your career sounds crazy...in a good way! Was your initial involvement with the UN in a technical role?
Basically, it's about trying to defend international norms from an onslaught of attempts to make states the primary defender of the informational realm, and thereby legitimise oppression.
Yeah, first job for UN was coding a shitty CRUD system in order to keep track of HIV infections in East Africa.
My rationale for starting this project was that I like specific features or facilities of many individual languages, but I dislike those languages for a host of other reasons. Furthermore, I dislike those languages enough that I don't want to use them to build the projects I want to build.
I'm still at a relatively early point in the project, but it has been challenging so far. I'm implementing the compiler in Crystal, and I needed a PEG parser combinator library or a parser generator that targeted Crystal, but there wasn't a PEG parser in Crystal that supported left recursive grammar rules in a satisfactory way, so that was sub-project number 1. It took two years, I'm ashamed to say, but now I have a functioning PEG parser (with seemingly good support for left recursive grammar rules) in Crystal that I can use to implement the grammar for my language.
There is still a ton more to be done - see http://www.semanticdesigns.com/Products/DMS/LifeAfterParsing... for a list of remaining ToDos - but I'm optimistic I can make it work.
I think V is an impressive language, but it isn't quite geared toward my vision of what a language ought to be.
I am more a Rubyist than a C, Rust, or Go developer, and so my preference is for a higher level language that's a little more pleasant to use and doesn't make me think about some details that I consider "irrelevant". I'm firmly in the "sufficiently smart compiler" camp, and think that I shouldn't have to think about those low level details that only matter for the sake of performance - the compiler ought to handle that for me.
Did you use Sérgio Medeiros' algorithm for left recursion, perchance?
My takeaway from Tratt's explanation was that the general technique of growing the seed bottom-up style when in left-recursion - I think I've also seen that idea termed "recursive ascent" somewhere else but I can't place it offhand - seemed reasonable, so that's what I kept working on until I figured out something that seemed to work.
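A toy sketch of that seed-growing idea (hard-coded to one left-recursive rule over single-digit numbers, nothing like a full packrat parser):

```python
# Rule: expr <- expr '-' num / num
# Seed the memo with failure, match the base case, then repeatedly
# "grow the seed" by retrying the left-recursive alternative.

def parse_num(s, pos):
    if pos < len(s) and s[pos].isdigit():
        return int(s[pos]), pos + 1  # (value, next position)
    return None

def parse_expr(s, pos):
    memo = {pos: None}           # seed: pretend the rule failed
    result = parse_num(s, pos)   # first pass falls through to the base case
    while result is not None:
        memo[pos] = result       # grow the seed with the longer match
        val, p = result
        if p < len(s) and s[p] == '-':
            rhs = parse_num(s, p + 1)
            if rhs is not None:
                result = (val - rhs[0], rhs[1])
                continue
        break
    return memo[pos]
```

The growth loop also gives the left-associativity you want: "5-2-1" parses as (5-2)-1.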
Later on, I ran across https://github.com/PhilippeSigaud/Pegged/wiki/Left-Recursion, which describes Sergio Medeiros' technique at a high level. One of the nice things I used from the Pegged project was the unit test suite. I re-implemented some of the unit tests from Pegged in my own PEG implementation and discovered that it failed at those unit tests.
It took me another number of months to figure out why my implementation failed the unit tests. I re-jiggered my implementation to make it handle the scenarios captured by those unit tests, and then naively thought "hey, it works!"...
All was well until I ran across another set of unit tests in the Autumn PEG parser (see https://github.com/norswap/autumn). My implementation failed some of those as well. After another number of months, I had a fix for those too.
Long long long story short, this process continued until I couldn't find any more unit tests that my implementation would fail, so once again I'm at the point where I think "well, I think it works".
There have been a number of occasions where I've thought "if this doesn't work, I'm just going to re-implement Pegged in Crystal!". Perhaps that's what I should've done. Ha ha! In a few months, when I find another test case that breaks my implementation, I may just do that. We'll see. I hope it doesn't come to that. Fingers crossed. :-)
- Relevant to you and your interests...
- ... but diverse enough to feed your intellectual curiosity
- Delivered in a timely fashion: apart from once a year big events, most things can wait for a few days, no need to require you to read the news every day
- Include some analysis to allow you to see the big picture
When I started a few years ago, I thought naively that a little machine learning should do the trick. But the problem is actually quite complex. In any case, the sector is ripe for disruption.
The goal is to have a system that avoids the rich-get-richer effect, avoids false negatives (good content with a bad rating), and in general achieves a better correlation between votes, quality, and clicks than upvoting systems.
I wrote a small simulation to test my hypotheses against HN and reddit scoring mechanisms, and it looks very promising.
Unfortunately I don't have more time to work on it...
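For context, the baseline such a simulation would compare against is the widely cited HN gravity formula (an outside approximation of HN's internals, not its actual code):

```python
def hn_score(points, age_hours, gravity=1.8):
    """Widely cited approximation of HN ranking:
    score = (points - 1) / (age_hours + 2) ** gravity."""
    return (points - 1) / (age_hours + 2) ** gravity
```

Since score grows linearly in points while visibility drives more points, this baseline exhibits exactly the rich-get-richer feedback loop an alternative mechanism would try to dampen.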
The website is all in Dutch, but you can probably get the gist of it (I live in the Netherlands but don't speak Dutch, and their mission is quite clear regardless).
We're making sure journalists get the best tooling to do their work. By empowering them, we help them spend time on what's actually important: writing quality content.
Would love to exchange some ideas with you.
Your list of points is great, if you can figure out a way to deliver a service like that it would be incredible.
I think one of the biggest challenges for current publications is the tie to advertising model - advertising business model forces products to decrease in quality over time. Same thing happening to Google and Facebook, but super apparent in news sites. They're fucking awful these days, I can't read a single article without ten huge popups and a paywall.
All the points mentioned in the parent comment have been done before: magazines and newspapers. (Some) people used to subscribe to multiple publications to get their intake of information. Wide-ranging, impact-based news was the daily publication's specialty. The newest in your specific interest was the magazines' playing field. News special reports used to be longform and discussed all the finer points, including analysis and graphs to see the big picture. Themed magazines served the intellectual curiosity.
Somehow, in the age of niche creators, these companies die out. I think the saying 'the sector is ripe for disruption' is true, but not in the way of software or automation. A better business model is really needed. The business model has been done before; the evolution to bite-sized factoids is a consequence of the shift to a more heavily advertising-based business model.
The limiting factor of paper space and physical distribution seems to strike a balance: news to be printed and distributed need to be worth it for the public to pay. Maybe bundling also made it work. The specific 'small' niches in newspaper/magazines can be fulfilled by sharing the cost with the mass of subscribers.
There is a tradeoff in the wide influence of gatekeepers, but even in that time independent publications managed to survive.
I think finding this balance again is really the key. Should we go back to tax-funded publications? Or will people welcome a microtransaction for articles? Or should the publications deliver curated, less frequent summaries to make customers happy? I think the disconnect between the customers and paying for content is driving down the quality and demand (in revenue) of these publications.
The recent years have shown that subscribing to the publications themselves is not optimal. Putting up a paywall angers people, but The Guardian has never wiped its donation banner off its pages. The need to find the correct business model for publication is urgent for the masses too; democracy that actually follows the people's will depends on this.
I don't follow the current landscape, but what The Athletic is currently doing is pretty interesting for sports.
I, personally, really like the 'Espresso' concept from The Economist. They curate 5 stories each day and deliver them in the morning directly in the app. No space to switch tabs and disengage, but space to dig deeper into the story through the links.
An ex-dentist attempted to strong-arm me into receiving an occlusal adjustment because my TMJ popped during a single visit. I knew this permanent procedure is rarely the best solution for the scenario. The dentist subsequently became irate and told me, "You'll lose all your teeth and look like an AIDS patient!" You can probably guess what era he's from.
I wanted to file a complaint, but it would've been my word against his, his assistant, and his hygienist. Absolutely ridiculous situation. It also provided a snapshot into how medical professionals exploit patient ignorance for revenue.
This eliminates so much fraud and so many mistakes.
I’m actually a dental student myself, and it saddens me that a significant chunk of dentists take advantage of the self-policing inherent to the field. It generates generalized distrust and resentment among the rest of dentists, in addition to being simply unfair.
As far as I know, there are no diagnosis codes in dentistry, just treatment codes. If there were, I imagine it would be possible to prevent this problem by randomly and routinely validating patients' charts.
On a side note, it is a budding dream of mine to build a start up related to dentistry, particularly in the realm of dental informatics, but not limited to it. I was wondering if you would be willing to chat with me about your experience sometime. It sounds fascinating.
For some reason I keep hearing about people flying to Serbia to do this.
What work, generally?
EDIT: Sorry, I missed the reviews part. Do you mean easily getting a second opinion based on diagnostic imaging?
Edit: Not in the US, but planning to launch there. You can't practice dentistry in the US if you haven't got a US diploma. However, diagnostic dental work is (at least in some states) an exception to this.
However, it's a bell curve. There are extremely moral and extremely immoral people. Some of them are dentists.
Absolutely true. However, it seems that other areas of medicine have better systems in place to prevent abuse, and dentistry would do well to follow suit.
Let's focus on the second part of that statement. It means that the majority of the cost of dental care goes to the practitioner, rather than to drug makers. This means they have more reason to cheat. The payoff is higher.
Research that showed the 28% figure:
Let's say you are Delta Dental: that 28% is basically insurance fraud. If you could get rid of it, you would save billions. You could offer lower premiums and full coverage without any copays.
Started because of frequent multitasking-heavy work with limited resources.
Open Beta (macOS) as soon as I finish license verification and delta updates.
I'm on Linux, so I won't be able to use your app, but great idea and good luck!
Also, it would be a data integrity nightmare if one context shared apps with another. How would you manage memory corruption, allocation, and saving in that sort of scenario?
Anyway, sounds awesome.
Not really a smooth experience in my opinion, doesn't map quite as well to the concept of "working context" as I think of it.
Also, you'd have to maintain your list of users, and manually sync any settings, etc. - whereas with Cleave, I'm planning on implementing white- or blacklisting of applications on a context-basis (and system settings etc. are implicitly shared).
> Separate user accounts is kind of the naive (and not quite complete) solution to the same problem.
I too have attempted to solve this problem with user accounts; and yeah it doesn't work well. Files are a pain to share, the log-out-log-back-in process takes forever, and a bunch of preferences don't sync across user accounts.
I particularly like the idea of having a super-low-energy mode where it's just for writing or reading, and saved states for my countless research sessions. Also, being able to freeze my dev workspace and resume it any point sounds amazing.
Excited to try it out!
The basic idea, on and off for close to five years. Started out experimenting with shell session persistence (solved, but not quite), then prototyping a browser-concept and playing with browser-extensions, then settling on the OS-level...