Ask HN: What startup/technology is on your 'to watch' list?
976 points by iameoghan 3 days ago | 661 comments
For me, a couple of interesting technology products that help me in my day-to-day job:

1. Hasura [1]

2. Strapi [2]

3. Forest Admin [3] (super interesting, although I can never get it to connect to a Hasura backend on Heroku ¯\_(ツ)_/¯)

4. Integromat [4]

5. Appgyver [5]

There are many others that I have my eye on, such as Node-RED [6], but have yet to use. I do realise that these are all low-code related; however, I would be super interested in being made aware of other cool & upcoming tech that is making waves.

What's on your 'to watch' list?

[1] https://hasura.io/

[2] https://strapi.io/

[3] https://www.forestadmin.com/

[4] https://www.integromat.com/en

[5] https://www.appgyver.com/

[6] https://nodered.org/




LibreSilicon [1]. Making chip manufacturing libre is extremely important for our freedom from corporate and state tyranny.

> We develop a free (as in freedom, not as in free of charge) and open source semiconductor manufacturing process standard, including a full mixed signal PDK, and provide a quick, easy and inexpensive way for manufacturing. No NDAs will be required anywhere to get started, making it possible to build the designs in your basement if you wish so. We are aiming to revolutionize the market by breaking through the monopoly of proprietary closed source manufacturers!

[1] https://libresilicon.com/


This is really really exciting. Thanks

It is. They are down to 1 µm, so they need a bit more than an order of magnitude of improvement to become performance-competitive: 100 nm is early-2000s technology, and the Raspberry Pi's chip is a 40 nm part.

1. Cloudflare Workers, I don't have the bandwidth to experiment with it right now but it interests me greatly.

https://workers.cloudflare.com/

2. Rust - definitely the next language I will learn. Sadly, the coronavirus cancelled a series of meetups in Michigan that promised a gentle introduction to Rust.

https://www.rust-lang.org/learn


Cloudflare Workers and their KV service (https://www.cloudflare.com/products/workers-kv/) are great. I built a side project (https://tagx.io/) completely on their services, from hosting the static site to the auth flow and application data storage. KV is pretty cheap as well, starting at around $5/month with reasonable resources.

I am also very interested in using Workers + KV. Can KV be used as a proper application database? Has anyone ever done that?

If by application database you mean to the level of an RDBMS, then no. It's a key-value data store. You get your CRUD operations, expiry, key listing, and the ability to do pagination, with keys always returned in lexicographically sorted order. Any relational data you want would have to be modelled with key prefixes, e.g.:

  article:1 -> {"user": 1, "content": "..."}
  user:1 -> {"name": "username"}
  user:1:articles -> [1, ...]
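
For illustration, a minimal Worker along those lines - a hedged sketch, assuming a KV namespace bound as DB (the binding name and key values are made up):

    addEventListener('fetch', event => {
      event.respondWith(handle(event.request))
    })

    async function handle(request) {
      // Write an article and index it under its author.
      await DB.put('article:1', JSON.stringify({ user: 1, content: '...' }))
      await DB.put('user:1:articles', JSON.stringify([1]))

      // "Relational" read: resolve the author's article ids, then fetch each one.
      const ids = await DB.get('user:1:articles', 'json')
      const articles = await Promise.all(ids.map(id => DB.get(`article:${id}`, 'json')))
      return new Response(JSON.stringify(articles))
    }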

Yep, that's the plan. I will have keys like

postid:someid/text

postid:someid/author

etc.

The relational aspect doesn't daunt me. As long as I can have a list/collection as a value, I can define a working schema.

I am more worried about whether it's costlier than normal DBs, and whether there are any other gotchas to keep in mind, as Workers KV has scant documentation.


If you're comparing this to a normal DB, the biggest worry should be that it's not ACID compliant. Definitely something to consider if your data is important. The limitations for KV are listed here: https://developers.cloudflare.com/workers/about/limits#kv

You should also consider how you intend to back up the data, as there currently isn't a process to do that outside of writing something yourself to periodically download the keys. This will add to your usage cost depending on what your strategy is, for example, backing up old keys that get updated vs only new keys by keeping track of the cursor.
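
For what it's worth, a hedged sketch of that roll-your-own approach, paging through keys with the cursor (DB is an assumed KV binding; saveToBackup is a hypothetical sink, e.g. a POST to external storage):

    async function backupAllKeys() {
      let cursor
      do {
        const page = await DB.list({ cursor, limit: 1000 }) // paginated key listing
        for (const { name } of page.keys) {
          const value = await DB.get(name) // each read is billed, adding to usage cost
          await saveToBackup(name, value)  // hypothetical: ship the pair somewhere durable
        }
        cursor = page.list_complete ? undefined : page.cursor
      } while (cursor)
    }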


I've seen tagx a couple of times before. Awesome to know who the author is.

I've heard many good things about Cloudflare Workers.

Excuse my ignorance & n00bness, but are they essentially a Cloudflare version of AWS Lambda, Google Cloud Functions and Netlify Functions, or are they something different/better?


IIRC Cloudflare Workers run at each Cloudflare PoP, which have higher geographical density than AWS regions, so latency experienced by end-users may be lower.

AWS has the same thing with Lambda@Edge

According to this [0] blog post, Lambda@Edge has significantly higher latency (due in part to a smaller number of locations). Cloudflare also uses V8 isolates instead of whole VMs, so the overhead is much lower. The disadvantage is that you can only run JavaScript and WASM.

[0]: https://blog.cloudflare.com/cloud-computing-without-containe...


Nice. Will check them out. IIRC they are really affordable too (like all serverless stuff tbh)

More lightweight. It's just V8, so there's basically no warm-up time.

They have vastly more PoPs than Amazon, so global performance for these is on a different level. But they are also more limited in compute and serve a slightly different purpose.



I've done a couple neat (IMO) things with CF workers.

- I use imgix to manipulate images in my app, but some of my users don't want anyone to be able to discover (and steal) the source images. Imgix can't do this natively; all image manipulation instructions are in the URL. So I put a CF worker in front of imgix; my app encrypts the URL, the worker decrypts it and proxies (a rough sketch of this pattern is below).

- A year ago, intercom.io didn't support permissions on their KB articles system. I like intercom's articles but (at the time) wanted to restrict them to actual customers. So I put a CF worker in front that gates based on a cookie set by my app.

These are both trivial, stateless 5-line scripts. I like that I can use CF workers to fundamentally change the behavior of hosted services I rely on. It's almost like being able to edit their code.

Of course, this only works for hosted services that work with custom domains.
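
For the curious, the rough shape of the imgix one, sketched from the description above (the token format, the AES-GCM scheme, the SECRET_KEY binding and the imgix host are all my assumptions, not the actual script):

    // App encrypts '/<path>?<imgix params>' with a shared key and sends
    // base64url(iv || ciphertext) as the URL path; the worker reverses it.
    addEventListener('fetch', event => {
      event.respondWith(handle(event.request))
    })

    async function handle(request) {
      const token = new URL(request.url).pathname.slice(1)
      const bytes = Uint8Array.from(
        atob(token.replace(/-/g, '+').replace(/_/g, '/')), c => c.charCodeAt(0))
      const key = await crypto.subtle.importKey(
        'raw', Uint8Array.from(atob(SECRET_KEY), c => c.charCodeAt(0)),
        'AES-GCM', false, ['decrypt'])
      const path = new TextDecoder().decode(await crypto.subtle.decrypt(
        { name: 'AES-GCM', iv: bytes.slice(0, 12) }, key, bytes.slice(12)))
      return fetch('https://example.imgix.net' + path) // proxy; source URL stays hidden
    }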


> I like intercom's articles but (at the time) wanted to restrict them to actual customers. So I put a CF worker in front that gates based on a cookie set by my app.

Might be against their terms? I remember someone asked if they could treat Workers as an HTTP reverse proxy to essentially bypass restrictions, and the answer was "no".


Seems unlikely. But if they really want to lose paying customers, that would be one way of doing it.

> These are both trivial, stateless 5-line scripts

Would it be possible to share these scripts? I would love to see them, they sound really helpful/useful


Sure. Here's the help system one (no longer used since intercom now supports permissions, and I opened up the help system anyway):

    addEventListener('fetch', event => {
      event.respondWith(handleRequest(event.request))
    })

    async function handleRequest(request) {
      const cookie = request.headers.get('Cookie');
      if (cookie && cookie.includes('foo=bar')) {
        return await fetch(request);
      } else {
        return new Response('You must log in before accessing this content');
      }
    }

The encrypted URL script is actually a bit longer than "5 lines" (it has been a while), so here's a gist:

https://gist.github.com/stickfigure/af592b1ce7f888c5b8a4efbe...


It’s pretty incredible. You can put it over any site and build your own A/B testing or targeting based on the user or the link used to get to your site.

I used Workers to create one of the fastest website analytics tools after Google Analytics: https://rapidanalytics.io (still in development).

Fast, as in what sense? The tracking code loads fast? The tracking requests are sent fast to the server? The dashboards load fast?

At my company, we run 100% of our front-end React/Vue web apps on Cloudflare Workers. We love it: deploys are really easy, and performance/resilience is built in.

Why Rust over Go?

Different use cases.

Go was designed to be easy to use for undemanding problems. People mostly switch to Go from Ruby, Python, or Java, or from C in places where C was an unfortunate choice to begin with.

Rust is gunning for C++ territory, in places where the greater expressiveness of C++ is not needed or, in some cases, not welcome. They would like to displace C (and the world would be a better place if that happened), but people still using C at this late date will typically be the last to adopt Rust.


So you’re saying it should have better performance than go?

Sometimes. Better control of performance.

Easier to search for if nothing else.

I am mainly saying it is suitable for solving harder problems than Go. Most problems are not hard; Go is good enough for them, and easier to learn and use well.

All these languages are Turing-complete. The difference is in how much work it is to design and write the program, and in whether it can satisfy performance needs.

C++ wins here by being more expressive, making it better able to implement libraries that can be used in all cases. Rust is less capable, but stronger than other mainstream languages.


1. https://www.starlink.com/ Finally, truly global and low latency satellite internet.

2. Generative models for video games - https://aidungeon.io/ is barely scratching the surface. Story, art, animation, music, gameplay, it will all be generated by models in the future.

3. New direct drive robotics actuators such as https://www.google.com/search?q=peano-hasel+actuators I think actuators are holding robotics back more than software now. Breakthroughs are needed. No general purpose robot will ever be practical with electric motors and gearboxes.

4. Self-driving cars are still happening, despite delays. I think discounting Tesla's approach is a mistake, but Waymo is still in the lead.

5. NLP is finally starting to work. The potential for automation is huge. Code generation is very exciting as well.

6. I was excited for Rust but I now believe it's too complex. I'm looking for a much simpler language that can still achieve the holy grail of memory safety without GC pauses or refcounting. But I'm not holding my breath. If ML models start writing a large part of our code then the human ergonomics of programming language design will matter less.


Jai? Jonathan Blow’s new programming language might be an option for you.

https://inductive.no/jai/


Jai doesn't do very much in terms of memory safety; Zig [1] might be a better alternative, plus it actually exists.

1 - https://github.com/ziglang/zig


What does Zig offer regarding memory safety? Isn't pointer manipulation in Zig as unsafe as in C?

For example [1] & [2], with more being worked on. Now, Rust is king when it comes to memory safety, especially at compile time, and is miles ahead of anyone else (not counting research languages), but Jai isn't really being designed with much emphasis on memory safety, so I'm not sure it's fair to propose it as a Rust alternative if memory safety is what you're after.

1 - https://andrewkelley.me/post/unsafe-zig-safer-than-unsafe-ru...

2 - https://ziglang.org/#Performance-and-Safety-Choose-Two


Would love to read up on the advancements in NLP. Can you share some links?

Geometric algebra: https://www.youtube.com/watch?v=tX4H_ctggYo

It makes a lot of hard physics problems (Maxwell's equations, relativity theory, quantum mechanics) much more understandable and (I'm told) unifies them in a common framework. I think it will help your average developer become comfortable with these areas of physics.
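
To give a flavor (hedged - conventions differ between authors): in the spacetime algebra, Maxwell's four equations collapse into a single equation for the field multivector, in natural units:

    \nabla F = J
    % F = E + IB is the electromagnetic field bivector,
    % \nabla = \gamma^\mu \partial_\mu is the spacetime vector derivative,
    % and J is the spacetime current. The four vector-calculus equations
    % fall out by splitting this into grades.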


The speaker in the video runs a community for people interested in geometric algebra: https://bivector.net/

Check out the demo: https://observablehq.com/@enkimute/animated-orbits

Join the Discord: https://discord.gg/vGY6pPk


I can't wait to read into this. Switching formulas to tau was incredibly useful for me when I was doing a lot of 3D math for game dev.

Is all math/logic most fundamentally geometry?

Don't think so. Geometry requires space, which has certain features which constrain its properties (sorry for a tautology). If you avoid such constraints, you can still have math, but it doesn't make sense to call it geometry.

It looks a bit surprising for the definition of math to include a concept of space. Geometry looks underappreciated, yes, but to replace the whole of math...


To me, the answer here is "kinda, but no". Math at its most basic level is about studying logical connections. Sometimes being able to symbolize something in notation allows inspection of logical objects otherwise unobservable - like higher-dimensional objects. But there's the whole area of mathematical logic. I think I can say Gödel's incompleteness theorems were only about axiomatic systems, with no necessary connection to geometry. Mathematics loves to study logical reductions of things, and geometry can certainly be left out through reductions.

There's something of a geometry/algebra separation in mathematics, too. The last few centuries (?) have tended toward algebraic research to the exclusion of geometry. There's even reason to believe the two types of reasoning are separated in human brains, insofar as people tend to be good at one and less good at the other.


Ah, but you can't encode math except in some necessarily geometric form.

Are you referring to written notation? Calling that geometry is a bit of a stretch. There's also nothing geometric about maths encoded in computer code, or many types of mathematical thoughts, so I think you are just incorrect.

> Are you referring to written notation? Calling that geometry is a bit of a stretch.

Can you write without shape?

> There's also nothing geometric about maths encoded in computer code

Look at a computer chip under a microscope: nothing but geometry.

> or many types of mathematical thoughts

In re: math itself, perhaps there is such a thing as a mathematics of the formless (I doubt it but cannot rule it out) but to communicate it you are again reduced to some symbolic form.

> so I think you are just incorrect.

I've been thinking about this for a long time, and I'm still not 101% convinced, but I think it's true: you can't have information without form.

Check out "The Markable Mark" and "My Stroke of Insight". The act of distinction is the foundation of the whole of symbolic thought, and it is intrinsically a geometric act.

http://www.markability.net

> ... what is to be found in these pages is a reworking of material from the book Laws of Form.

> Think of these pages, if you like, as a study in origination; where I am thinking of 'origin' not in the historical sense but as something more like the timeless grounding of one idea on or in another.

Distinction is a physiological thing the brain does. It can e.g. be "turned off" by physical damage to the brain:

https://www.ted.com/talks/jill_bolte_taylor_my_stroke_of_ins...

https://en.wikipedia.org/wiki/My_Stroke_of_Insight

> Dr. Jill Bolte Taylor ... tells of her experience in 1996 of having a stroke in her left hemisphere and how the human brain creates our perception of reality and includes tips about how Dr. Taylor rebuilt her own brain from the inside out.

So whether you come at it from the mystical realm of pure thought or the gooey realm of living brains, all math is geometric. (As far as I can tell with my gooey brain.)

Cheers!


> and it is intrinsically a geometric act.

Why? Can't you have distinction without geometry? It's not only position that can be distinct; you can have other properties.

Two digits in different positions on paper can be both different - 0 and 1 - and the same - 5 and 5. You could encode them not by shape but, say, by the kind of particle?

And in general, our physical world has space - but how would you prove a world without space as we understand it can't have math?


> Why? Can't you have distinction without geometry?

Maybe but I don't see how.

> It's not only position which can be distinct, you can have other properties.

Properties like what? Color, sound, temperature, etc., all of these are geometric, no? Can you think of a concrete physical property that doesn't reduce to some kind of geometry?

> You can encode them not by shape, but, say, by kind of particle?

Sure, but then that particle must have some distinction from every other particle, either intrinsic or extrinsic (in relation to other particles), no?

Any sort of real-world distinction-making device has to have form, so that eliminates real non-geometric distinctions.

It may be possible to imagine a formless symbol but I've tried and I can't do it.

The experience of Dr. Taylor indicates to me that the brain constructs the subjective experience of symbolic distinction. (Watching her talk from an epistemological POV is really fascinating!)

So that only leaves some kind of mystic realm of formless, uh, "things". My experience has convinced me that "the formless" is both real and non-symbolic, however by the very nature of the thing I can't symbolize this knowledge.

    In the Beginning was the Void
    And the Void was without Form
If you can come up with a counter-example I would stand amazed. Cheers!

> Can you think of a concrete physical property that doesn't reduce to some kind of geometry?

How would you reduce charge to geometry? Or spin?

Can we differentiate the electrons in a helium atom by their position in space?

But we sort of digress. The question was whether a concept of space is required for a concept of math, and specifically, whether we can have distinction without space. Surely we can at least think of distinction without space, even if we'd fail to represent it in our physical world?


Sidewalk delivery robots.

The problem is a lot easier than driverless cars (everything is slower, and a remote human can take over in hard places), and there is huge potential to shake up the short-to-medium-distance delivery business. It's the sort of tech that could quickly explode into hundreds of cities worldwide, like e-scooters did a couple of years ago.

Starship Technologies is the best-known and furthest-advanced company in the area. https://www.starship.xyz/


Yesterday, while driving in downtown Mountain View, CA, one of these damn things stopped short of coming into the crosswalk. So I and the driver going the opposite direction stopped, like we would for a person.

The damn thing made us wait for what felt like an eternity. And it still didn't move. So I started to roll forward, and I swear to you, I was almost hoping I would hear the crunch of electronics if the thing decided to roll forward, given that I had given up on it.

A year ago I was in a BevMo trying to get beer, and an inventory robot was in the aisle. The thing was bulky (morbidly obese?) and I couldn't get by it as it was rolling slowly up the aisle taking pictures on both sides, so I went down the parallel aisle hoping to get in front of it to get to my beer. Nope, the thing got there first and blocked me.

Robots are our future. And it will be annoying during our lifetime. There was a reason Han Solo snapped at C-3PO to shut up. I don't know what Han had to deal with in his lifetime, but I can take some guesses now about where his "shoot first" mentality came from.


We've been ordering through them once every couple of weeks during the pandemic. It's really cool. Even though the robot itself is really slow (it takes a good 40 minutes for a 1-mile journey), they're usually pretty available and responsive, so we'd get things faster than on human-based platforms (where a human has to be available, then go to the pickup point, then deliver).

It seems like if they get popular they're going to run into problems with sidewalk availability. We're already using the sidewalks for walking. You can add a few robots and not get any blowback, but once the novelty wears off, having to navigate around slowpoke robots on your walk is going to get old.

Cities are not immutable objects. It's going to be extremely contentious, because all local politics is, but it's not infeasible to alter our cities.

Seen a bunch of these doing deliveries within Milton Keynes.

Velomobiles! A velomobile is a recumbent bike with a fairing, which makes it more convenient and much faster than a regular bike. A fit rider can easily overtake the peloton in the Tour de France (https://www.youtube.com/watch?v=UBb7YIRcBe0). The velomobile in the clip is a standard model, and there are racing models that are faster still!

Just like with regular bikes, you can add electric assist to them to extend their range and make the job of the rider easier. In this clip (https://www.youtube.com/watch?v=OCo4cRQMBlo) the rider gets an average speed of 37.5 km/h (top speed 84 km/h) over a distance of 73 km with over half the battery remaining. And that is without wearing the racing hoodie, which significantly reduces drag.

The main problem with velomobiles is that they are expensive. The frame is made from carbon fiber and needs to be handcrafted, so the price ranges from about €5000 to €10000, which is too expensive for most. If some Chinese giant or billionaire investor set out to mass-produce velomobiles, I'm sure they could totally revolutionize transportation.


Optical Coherence Tomography (OCT): occupies an intermediate position between ultrasound and MRI in accuracy/skin depth for soft tissue.

Optically pumped magnetometers (OPM): approach SQUID-level accuracy without the need for a supercooled device; can be worn or used as a contact sensor like ultrasound.

LoRa: a long-range (10 km+), low-power, sub-gigahertz radio-frequency protocol useful for battery-powered IoT devices transmitting small amounts of data.

Heat cabinets for infectious diseases: an old technology used to fight polio and other diseases that went out of favor with the introduction of antibiotics. May find utility against novel viral infections.

UV light treatment of blood: another old technology that may find use against novel infectious agents. Stimulates the immune system to fight viral infections.


Oh man, I used to do research in OCT for deep brain stimulation! It's pretty cool tech, that is for sure. It's got a huge market for bio applications and certain industrial uses.

That said, optics is a super finicky field. You can come in and get a Nobel for 5 hours of work, or you can spend 50 years in a dark room trying to get things together. Alignment is crazy difficult, though it seems like it shouldn't be.

Anyone that wants to dive into optics: Just do it for 2 years, no more.


Alignment should be done system-wide by orders of magnitude. If you are on a breadboard, get everything to within a cm of final location, then everything within a mm, etc. Don’t ever spend more than 1 minute at a time on any component. This stuff was not in the textbooks.

It's especially hard with OCT, as it's in the IR spectrum. You just have to go on your meters alone. It takes forever.

Tell me more about getting a Nobel in 5 hours. I’ll buy your ebook :)

Meaning, if you don't get there in two years, you won't?

Neat list. Thanks.

I have chronic graft-versus-host disease, a side effect of a bone marrow transplant. It mostly affects my skin, which changes color, gets thinner, and in advanced stages hardens (aka marbleization).

GVHD is wicked hard to diagnose and monitor: skin biopsies and normal digital photos.

I've asked my misc doctors (at FHCRC, SCCA, Univ of Wash) over the years about using UV to better diagnose skin conditions.

Now I'm wondering if OCT could also be helpful, perhaps for assessing the scarring.


OCT is a recognized diagnostic modality in dermatology and worth discussing with your doctors. Here are some references:

https://www.mdedge.com/dermatology/article/146053/melanoma/o...

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5946785/

http://www.octnews.org/category/5/dermatology/

and here is a clinic that talks about using it to assess skin: https://dermnetnz.org/topics/optical-coherence-tomography/


> UV light treatment of blood.

What? No... No don't do this. This is a discarded idea from the era before molecular biology, and it was discarded for very good reason.


Opportunity can come from ideas that are correct but not generally accepted as correct.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4783265/

If it were to work it would be a useful new modality. I am not promoting it, but it's on my "watch list" due to efforts by AYTU at Cedars Sinai.


In my experience the commonest current mainstream use of this is for Sezary Syndrome / mycosis fungoides (cutaneous T-cell lymphoma). See for example:

https://my.clevelandclinic.org/health/articles/8493-mycosis-...


Opportunity to give people leukemia. These ideas are at a prehistoric level of biology. We're way beyond this silliness now.

I am talking about work being done in clinical trials at reputable medical clinics. They may be mistaken but I don't think it's "silliness." Here is a recent clinical trial evaluating UVBI https://www.tandfonline.com/doi/full/10.1080/2331205X.2019.1...

Of course there are many other mainstream treatments that came from somewhat oddball ideas: Sister Kenny's treatments for polio, the Nobel-prize-winning discovery by Barry Marshall and Robin Warren that ulcers are caused by bacteria (H. pylori), the use of leeches for treatment of venous congestion after surgery, and the use of maggots for wound debridement.


It should never have been signed off on by an IRB. It is irresponsible and horrific that this has been trialed on people in this century.

I suspect this may be a case of "the dose makes the poison."

You may be generalizing from a specific experience or specific experiment and rejecting a modality that may have significant efficacy.

It's hard to tell what you are basing your assertions on because you offer no specifics. My "watch list" interest is based on the number of positive experimental results and ongoing investigations of the technique.


Any specific objections to the research links provided?

GPGPU. GPU performance is still scaling along Moore's law, while single-core performance has plateaued. The implication is that at some point the differential will become so great that we'd be stupid to run anything other than simple housekeeping tasks on the CPU. There's a lot of capital investment that'd need to happen for that transition - we basically need to throw out much of what we've learned about algorithm design over the past 50 years and learn new parallel algorithms - but that's part of what makes it exciting.

Sounds interesting. What language is best positioned for GPGPU?

C++ through CUDA is by far the most popular option. There is some support in other languages, but the support and ecosystem are far from what exists for CUDA and C++.

Python via RAPIDS.ai. Python first, because most of the data-science community for production + scale is in it. It feels like the early days of Hadoop and Spark.

IMO Golang and JS are both better technical fits (Go for parallel concurrency, and JS for concurrency/V8/typed arrays/WASM), and we got close via the Apache Arrow libs, but it will be a year or two more for them, as a core supporter is needed and we had to stop the JS side after we wrote Arrow. The Python side is exploding, so now it's just a matter of time.


Zig. There's a "Why Zig When There is Already C++, D, and Rust?" writeup at https://github.com/ziglang/zig/wiki/Why-Zig-When-There-is-Al...

Seriously, Zig is so amazing. If all Zig were was https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace... it would be unbelievable, but it's so much more than that.

I came here to say the same thing: Zig. The design decisions are spot on.

For example, Modeling Data Concurrency w/ Asynchronous I/O in Zig by Andrew Kelley: https://t.co/VYNqNcrkH1?amp=1



> D has @property functions, which are methods that you call with what looks like field access, so in the above example, c.d might call a function. Rust has operator overloading, so the + operator might call a function.

But I love properties and operator overloading


You just put learning Zig on my todo list. Thanks :D

I wrote a guide on connecting Hasura + Forest admin for no-code SaaS apps + admin backends:

"14. Connect Forest Admin to Hasura & Postgres"

http://hasura-forest-admin.surge.sh/#/?id=_14-connect-forest...

For Heroku specifically, you need to make sure that the client attempting to connect does so over SSL, so set SSL mode if possible (many clients will do this by default).

To get the connection string for pasting into the Forest Admin config, run this:

    heroku config | grep HEROKU_POSTGRESQL
That should give you a connection string you can copy and paste to access the database externally from Heroku:

    HEROKU_POSTGRESQL_YELLOW_URL: postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398
https://devcenter.heroku.com/articles/heroku-postgresql#exte...
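
If a client doesn't turn SSL on by default, most Postgres clients (anything libpq-based) accept an explicit sslmode parameter tacked onto that connection string, e.g.:

    postgres://user3123:passkja83kd8@ec2-117-21-174-214.compute-1.amazonaws.com:6212/db982398?sslmode=require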

Awesome, I definitely will. I think it was your post in a different thread earlier this year where I came across it originally. I remembered the name, as you helped me on the Hasura Discord (thank you for all your awesome input there) & it looks so promising.

Thank you very much for this article, that's awesome!

Oh snap, the founder of Forest Admin!

Glad you liked the post. I've been using Forest on both real-world SaaS platforms and small side-startups since early 2017. Really cool to watch how much you've evolved since then.

Also, Louis S. is amazing! I've sent two emails to you guys over the years, Louis answered both of them within a day.

Throwback to 2017 UI ;)

https://i.imgur.com/KT9Wtlx.png


This comment is epic, thank you very much :-) I'm sure Louis will be super happy to read this as well.

See you soon!


1. Starship - https://www.spacex.com/vehicles/starship/

Completely reusable rocket that can carry 100 tons into low Earth orbit, refuel, and then continue on to places like Mars. Launches are estimated to cost SpaceX about $2M, compared to roughly $1B for the SLS (estimated; similar lift capability) and $1B for the Space Shuttle (27 tons). The engines are real, and test vehicles are flying (another test launch is likely in the next week or two). Follow the SpaceX subreddit for progress updates.

2. Commonwealth Fusion Systems - http://cfs.energy

Lots of reactors have struggled with scale and plasma instability. CFS has adopted a design using new REBCO high temperature superconductor magnets that are stronger and smaller, which can be used to build smaller reactors and better stabilize the plasma. They are building a prototype called SPARC, expected to produce net energy gain by 2025.


I’m a little surprised that there aren’t any mentions of Obsidian, while there are at least two mentions of Roam. To all Roam lovers, and to all intellectuals in general, I’d recommend you to check out Obsidian [1] from the makers of Dynalist.

It’s also a tool made mainly for Zettelkasten, but it is offline and local by default. It’s not an outliner like Roam, but rather a free-form text editor.

I feel that Obsidian’s values align more closely with the values of a general HN reader. For example, the files (Zettels?) are plain markdown files, so the portability is much higher than what is the case with Roam (which is online only, and your data is somewhere in a database in a proprietary format).

Another example would be the support for plugins, which are first-class citizens (although the API is yet undocumented) — many of the core features are implemented as plugins and can be turned off.

And there's a Discord channel where you can talk with the devs, who are very responsive - so much so that I'm surprised they can roll out new features so quickly (at least one feature update per week, from my limited experience with Obsidian).

(Not affiliated in any way, just a happy user. I copied most of this comment from another comment of mine)

[1]: https://obsidian.md/


> I'm a little surprised that there aren't any mentions of Obsidian, while there are at least two mentions of Roam. To all Roam lovers, and to all intellectuals in general, I'd recommend you to check out Obsidian [1] from the makers of Dynalist.

> It's also a tool made mainly for Zettelkasten

Just so you know, I consider myself probably a fairly typical HN user. Got my own little daily tech concerns, but keep a toe in the water of the larger zeitgeist. I have no idea what anything you just said means. You could be talking about brands of car or my little ponies for all I know. Googling it - seems it's something to do with notes?

Just remember that not everyone is in your little concern-bubble, and one or two explanatory sentences would be very welcome.


I wrote an article about Luhmann's Zettelkasten if you are interested: https://emvi.com/blog/luhmanns-zettelkasten-a-productivity-t...

I had to stop somewhere with the explanations. I was mainly addressing people already familiar with Roam, and also decided that Zettelkasten as a term is quite easily googleable. It’s true I could have slipped a few words there, along the lines of “...which is a note-taking technique” — I’ll make sure to do that next time.

the other thing is https://roamresearch.com/

It's a text-based wiki or outliner (collapsible text) taken a step further, with automatic backlinks etc.

Feels to me like a weaker org-mode, online, with better cross-links/embeds (something that is indeed uncool in org; things can't live in two places at once).


There is org-roam, and it's getting better by the day: https://github.com/org-roam/org-roam

Yea, have to try that some time.

I'll need to switch from one biiiig file to multiple ones then, though. I think my biggest hindrance is setting up the refile targeting :)


Gonna check this out, seems very useful!

Subvocal recognition: https://en.wikipedia.org/wiki/Subvocal_recognition Imagine how much more people would use voice input if they could do it silently.

Also neural interfaces like CTRL-labs was building before being acquired. Imagine if you could navigate and type at full speed while standing on the subway.

I think that rich, high fidelity inputs like those are going to be key to ambient computing really taking off.


Been wanting subvocalization since reading the Ender series

- Far UVC lights (200 to ~222nm) such as Ushio's Care222 tech. This light destroys pathogens quickly while not seeming to damage human skin or eyes.

- FPGAs. I'm no computer engineer, but it seems like this tech is soon going to drastically increase our compute.

- Augur, among other prediction platforms. Beliefs will pay rent.

- WebAssembly, as noted elsewhere. One use case I haven't read here yet is distributed computing. BOINC via WASM could make it easy for dozens more users to join the network.

- Decision-making software, particularly that which leverages random variable inputs and uses Monte Carlo methods, and helps elicit the most accurate predictions and preferences of the user.


I'm an FPGA engineer and I doubt they will go mainstream. They work great for prototyping, low-volume production, or products that need flexibility in features, but they are hard to use (unlikely to get better in my opinion) and it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

That said, they are very cool! And learning to create FPGA designs teaches you a lot about how processors and other low level stuff works.


> it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

I see them going mainstream when brain-computer interfaces go mainstream (probably a long way away), since a lot of that (in my experience working in a couple of labs and on some related hardware) depends on processing a lot of the data from the sensors - most of it is thrown away due to the sheer volume - transferring it back, and being able to easily update the filtration matrices tailored to the sampled data.


FPGAs are too expensive, power-hungry, and large. We use them for many tasks at my workplace, and we are spinning up an ASIC team because using FPGAs just doesn't meet our power and size requirements. Also, building ASICs can be cheaper in the long run if the future of what needs to be done is relatively stable.

> Also, building ASICs can be cheaper in the long run if the future of what needs to be done is relatively stable.

I don't doubt it, yet I find it hard to describe the human brain over time, especially across people, as "relatively stable" - at least from the perspective of DSP and beamforming of impedance measurements from the scalp to gauge the relative power output at variable regions in the brain.


> Far UVC lights (200 to ~222nm)

OK, these are not safe wavelengths, and whatever you're reading is not right. This is absolutely ionizing radiation. The rate of formation of thymine dimers in this regime is similar to that around 260 nm. That is, it causes DNA damage. Please see Figure 8 below:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1751-1097....

The logic of the claim that you can destroy a pathogen with UV but not cause damage to human tissues is incongruous. If it kills the pathogen, it also causes radiation damage to human tissues as well. One cannot dissociate these because they are caused by the same photoionization mechanism.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5552051/

> We have previously shown that 207-nm ultraviolet (UV) light has similar antimicrobial properties as typical germicidal UV light (254 nm), but without inducing mammalian skin damage. The biophysical rationale is based on the limited penetration distance of 207-nm light in biological samples (e.g. stratum corneum) compared with that of 254-nm light. Here we extended our previous studies to 222-nm light and tested the hypothesis that there exists a narrow wavelength window in the far-UVC region, from around 200–222 nm, which is significantly harmful to bacteria, but without damaging cells in tissues.

> As predicted by biophysical considerations and in agreement with our previous findings, far-UVC light in the range of 200–222 nm kills bacteria efficiently regardless of their drug-resistant proficiency, but without the skin damaging effects associated with conventional germicidal UV exposure.


So if I'm reading correctly, the 207-nm ultraviolet light simply doesn't make it past the outer (dead) layer of skin.

Correct, but I’d still like to see their data as to what the impact is to eye tissue.

FPGAs have been around for quite a while. Is something changing?

Non-stupid open toolchains are slowly happening. Vendor toolchains are the biggest thing holding back FPGAs. Everyone hates them; they're slow, huge, and annoying to use.

One thing that is changing quickly: deep learning, particularly inference on the edge. FPGAs are more versatile than ASICs.

Everyone making ML ASICs would disagree.

This just provides a cost advantage though right? I mean that’s great, love me some margin, but it’s not really a new frontier. Unless I’m wrong?

Dozens!

https://luna-lang.org - a dual-representation (visual & textual) programming language

RISC-V

Zig programming language

Nim programming language

(also some stuff mentioned by others, like WASM, Rust, Nix/NixOS)


> luna-lang

Whoa... had to do a double take there.

Great to see luna seems to be alive yet again - now "enso lang" per github [i]. A git commit just days ago... so here's hoping! It is such a great concept.

[i] https://github.com/luna/ide


Nix - it takes buy-in, but it's been very worth it for us. Builds are reliable, reproducible, and can exist alongside one another. Plug an S3-powered CloudFront cache into it, and you're never building the same thing more than once.

Deno - sandboxed by default seems a powerful way to let our customers run custom code. Native TypeScript, and it builds single binaries. I have yet to play around with it, but those all seem like compelling advantages over Node.js.
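
For example, permissions are opt-in flags (the script name here is a placeholder):

    deno run server.ts              # sandboxed: file, network and env access all denied
    deno run --allow-net server.ts  # grant network access only; files and env still blocked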


Crazy idea time. Is anyone piping randomly generated code into Nix and selecting for AI in the output? (I'm pretty far out of my realm here, so sorry in advance.)

The search space of possible code is unfathomably enormous. I think you'd have better luck generating amazing art with random colours for each pixel (i.e. still none).

People have done more limited genetic programming for a long time now (essentially randomly mutating formulas, keeping ones that do better), but neural networks are doing the arbitrary function-fitting better at the moment.

What does it have to do with Nix, though?


Bleary-eyed me thought Nix sounded well-defined enough to make searching the input space more tractable.

Optogenetics [0]. Light changes electrical behavior in cells. AKA, point laser, neurons fire, I know kung-fu

Memristors [1]. Rebuilding the entire computer from EE basics. A new 'color' added to the EE spectrum; now computers process huge datasets on a watch battery.

CRISPR-Cas9 [2]. The tricks bacteria use to keep viruses out are pretty slick. Ctrl-C, Ctrl-V for gene editing. A $100B technology, easily.

Strangely (encouragingly?) all these words are 'misspelled'

NOTE: I am massively oversimplifying things.

[0] https://en.wikipedia.org/wiki/Optogenetics

[1] https://en.wikipedia.org/wiki/Memristor

[2] https://en.wikipedia.org/wiki/CRISPR


I was excited for memristors in 2008 when HP announced they were right around the corner. They even built a new computer and a new OS to take advantage of memristors [1]. And then it never happened and no one has ever built one and it’s pretty much vaporware. I would be hesitant to trust anyone who says they’re anywhere close. It’s just not a technology that actually exists.

[1] https://www.extremetech.com/extreme/207897-hp-kills-the-mach...


> They even built a new computer and a new OS to take advantage of memristors

I could be wrong but I think I remember reading somewhere they ran into patent infringement issues that they couldn't get around or something like that.


CRISPR-Cas9 seems to be pretty much "done" already. It's already used by many labs and high-profile projects and has transformed upcoming gene-therapy pipelines, and the main problems are being ironed out. I don't think there is any doubt anymore that CRISPR is a big milestone in biotech.

VR. It seems just about ready, but still a little too expensive.

While good games are obviously already there, I'm more curious about work. Would an infinite desktop with an appropriate interface beat the reliable old 2x24" screen setup I have? I think it could.


I've had so many moments in VR where I could glimpse the future; I'm definitely bullish. The problems seem incrementally solvable: display resolution, portability, and comfort all look easy enough to improve with time, along with better/higher-fidelity inputs.

A big thing with it as well I think will be focus, I'd love to be able to entirely shut out the world while working on something for 90 minutes or so.

This is one where I think it'll get to be good enough outside of niche gaming and just take off - my prediction is it'll take about 6 more years (i.e. 3 more release cycles) before the hardware is past the post.


Not just for games. Imagine VR tourism, meetings... or even escapism. Especially these days. Last time I used one, the graphics weren't immersive enough yet. I don't mind tethering it to a powerful computer, but image quality is a must.

I enjoy VR games, but none of those sound attractive to me, at least with headsets and controllers.

Interactive virtual worlds are wondrous but not actually terribly practical for traditional tasks. Something they are perfect for, though, is training for roles involving a lot of awareness and physical interaction in rare and extreme environments, e.g. emergency workers, police, soldiers etc.


I think it's at an interesting tipping point.

Products like the Quest are crossing the threshold to where it's affordable, completely self-contained, and high-quality enough to provide a great true VR experience. They need to about halve the cost while maintaining quality, and if they can do that then there is no reason why this shouldn't explode into the marketplace.


Nobody wants to wear a VR headset all day

> While good games are obviously already there, I'm more curious about work.

Good games are most definitely not there. The consensus is that Alyx is really the only worthwhile VR title. Just about everything else is gimmicky and trite. VR still has a long way to go.


I play Eleven Table Tennis (as the table tennis clubs were closed due to covid). It is the best simulation game you can play today. The physics are very close to reality - so close that in-game and IRL skills are immediately transferable. The biggest issues I encounter are not with the game itself, but with the limitations of the Rift S inside-out tracking.

I'm in no position to judge, having played none, but I've heard glowing reviews for more than just Alyx; I didn't commit any titles to memory since I'm not planning to be an early adopter.

However, some do admit current VR is heavily carried by the novelty of using your hands (much like the Wii's motion controls made many average games enjoyable while it was fresh).


Skyrim and Fallout 4 are breathtaking in VR.

Zig Programming language.

Because it's basically C-+. It's extremely easy to use, and also extremely easy to parse, so (and maybe this is self-serving because I did it) I see a lot of automated code-generation tools to hook Zig into other PLs.

C's age is starting to show (and it's becoming a problem), and I think Zig has the opportunity to be sort of a place for old C heads like me to go to stay relevant, modern, safe, and productive.


https://www.sens.org : Solving the problem of aging and diseases of aging. Watch a few interviews with Aubrey de Grey to get a better idea of the possibilities of their research. Though this would come under "to watch" not for the immediate future but for the next decade or two.

One thing that's not clear to me is the advantage of living longer. Why do some people feel the need to live longer?

When I hear of blood transfusions and such, it also feels like a lot of these technologies are being developed by snake-oil salespeople for otherwise non-gullible but strictly egocentric humans.


One thing with aging research is that it's also about healthier living - for example, letting people be healthier well into old age, even if the number of years is the same. As for why people want to live longer, I think it's just human nature at the end of the day: more time to do the things you enjoy and help the people you care about.

Practically speaking, there are a lot of advantages to longer lives:

- Generally, it means people will be healthier, which means reduced societal burden.

- A deeper family structure can mean better education and childcare. You have more time to be with your friends and family, more time to pursue hobbies, more time to explore the world.

- Scientists, engineers and researchers can spend more time building and leveraging their expertise. If people live 20% longer for example, I suspect there's more than a 20% increase in advancement because in so many fields breakthroughs happen near the end of your career.

Of course, there will be consequences that need to be addressed.

- Does this only lead to increased inequity, where the wealthy are able to accumulate wealth and knowledge even more easily due to access to anti-aging? Already there's a 14-year difference in life expectancy between the richest and poorest 1%; imagine if that was 50 years.

- How do we adjust our social safety nets when people are living to 100 instead of 80?

- How does this change over-population and over-consumption?


I see your point. I think there is an angle there. However, it still feels like a rich person's game: I'm not sure average or below-average conditions (childhood mortality, adult mortality) in developing countries will improve because of this.

That's really where the "average" advantage seems to be: if life becomes better for everyone then they live longer.


All the new products around WireGuard. I'm so tired of running VPNs. NAT traversal with protocols like STUN, TURN and ICE is going to allow point-to-point networks for all the clients.

https://tailscale.com/

https://portal.cloud/app/subspace


ZeroTier.com did this years ago, and it works great.

I'm kinda sad for you - I've been using and advocating zerotier for a while (it's amazing and indispensable)...but in my circles the word 'wireguard' has got people excited, which (anecdotally) is benefitting tailscale and generating more hype around them than zerotier ever got. Hopefully a rising tide will lift all ships and you find a way to capitalise on it :)

(I prefer device-based zerotier-style access rather than login-based tailscale-style so that does sway me to zerotier...but I have to admit tailscale looks more polished, e.g. the screenshot of seeing other devices on the network. I get it's not a fundamental feature! But I can't help but appreciate it)


We are doing fine and V2 is coming soon with a ton of improvements. I just have to occasionally point out our existence again.

The pulldown showing other devices on a network does look spiffy, but that won't scale. We have users with thousands of devices on a virtual LAN, and the protocol will scale far larger. Not only will that not fit in a menu, but any system that relies on a master list will fall down: that list, and refreshing it, will get huge.

We are doing the tech first, spiff second.


Some feedback: I think I stumbled upon ZeroTier a while back and didn't really get what it is. IIRC it felt like something that is only useful for big companies, which is exactly what I felt today.

I think the website could do a better job showcasing how it's used.

Hope my feedback is helpful and wish all the best!


Our web site kind of sucks. We're going to be working with a design/marketing firm to re-do it soon.

It's kind of hard to explain ZeroTier sometimes. It's so simple (to the user) that people have a hard time getting it.

"You just make a network and connect stuff." Huh?

People have been conditioned to think networking is hard because 90% of networking software is crap.


UI issues for sure, but the product is great. I have computers in 3 different organizations and the ability to tie them into a coherent virtual site so they can all talk to each other is amazing. I no longer have to worry about having forgotten some file on my home network that I needed at the university, for instance. Looking forward to V2!

Thanks for ZeroTier! Managed to convert a few friends from using Hamachi for LAN games, which was always a pain to set up previously. It simply just works for my needs.

Ooh! So it could replace Hamachi. I think this is one use case (without using the product name) that could be listed on a use-cases page. Hope other ZeroTier users chime in with more use cases.

ZeroTier emulates a L2 Ethernet switch over any network, so anything you can do with Ethernet basically.

You make networks, add stuff to them, and any protocol you want just works: games, ssh, sftp, http, drive mounts (though they can be slow over the Internet), video chat, VoIP, even stuff like BGP or old protocols like IPX work.


Really happy to hear it's all going well, and I've been excited about V2 since I read the blog post about it - your product is awesome and solves a genuine need, and I really want you to succeed.

So many interesting links. This post should be a regular on HN.

Especially now. I've been using HN for years, and before covid I could read all the posts from the day before in 20 minutes. Now it takes 2-3 hours. Everyone is sharing.

A friend of mine is working on coscout. It's in beta right now, but he showed me some pretty insane machine learning based insights for companies, investors and founders.

Things like

- When will this company raise the next round?

- What is the net worth of <literally anyone>?

- What is the probability that this investor will invest in you? (given your sector, founder age, pedigree, gender, market conditions, whether you have an MVP, etc.)

- A bunch of other complicated stuff I didn't really understand

Definitely worth keeping an eye on if you're into this kinda stuff: https://coscout.com/


Feedback loops in this sort of thing always scare me. For example - say people of one demographic are less likely to fund raise, so the model says they're less likely to succeed, so investors using the model don't invest in them and they are put at an even further disadvantage. And so something that is inherently data driven can end up moving further away from the meritocratic ideal it's likely trying for.

And the thing is, it's hard to get this bias out of models - almost everything ends up correlating to age, race, gender and so on - zip codes, income, schools, past employers, etc.


Agreed. It's definitely up to the user to make smart decisions.

However, it's not so cut and dry either. In my last company (B2C mobile app based), we were pretty much getting beat by several competitors. And it showed across all metrics, ratings/reviews/downloads/web traffic/retention/engagement - what have you.

And later on we found out that the founders had been fudging the metrics and presenting them to investors, which is why they were never actually able to raise the round, though they came very close. By straight-up lying.

If some form of business/product intelligence is used to identify such red flags, it can save everyone involved from a lot of bad decisions and heartache.

In that regard, I welcome more empirical evidence based decision making (aka statistics/machine learning etc.) where it's appropriate.


Is there any way I can get access to this? The product seems intriguing, and I could be a paying customer.

They said they're in the final stages of testing right now; they'll first roll out slowly to customers they're already working with, then the waitlist, and then do a wider release for everyone by the end of next month.

Best bet would be to sign up on the waitlist for now https://coscout.com/dashboard


I guess I'm much more conservative than other folks, but I think we've scratched only 10% of the surface of the benefits that things like Kubernetes, Consul, Vault and Terraform should/will provide.

So they're on the list. I feel like at my job I'm pushing at the edges (as far as running large scale, stable production) and we've still got miles left.

Also Bazel.

I guess this is a boring answer.


>I think we've scratched only 10% of the surface of the benefits that things like Kubernetes, Consul, Vault and Terraform should/will provide.

What benefits are we not seeing?

So many apps over-engineer their scalability.


What benefits aren't we seeing?

End-to-end automation still isn't done in most places and it's considered hard.

Having made a significant automation investment, I can say that it's easier after you've put some of the work in. It trends toward easiness, but up front it can seem insurmountably hard.

Caveat: our infrastructure bill is stretching to about ten million yearly and our team is small (on average about 1M managed per person); size your expectations appropriately.


Was also going to post Bazel.

Here's the best 'elevator pitch' for Bazel that I know of. 3 minutes from Oscar Boykin about why Bazel (or at least a Bazel-like system) is the future.

https://youtu.be/t_Omlhh7IJc?t=40


Terraform is a game changer and easy to learn

Happy to be told otherwise, but I think that Juju is the only tool in that space that understands inter-connected applications and can spin up services that span k8s/VMs/clouds and work together.

Is there anything like a "Terraform provider for bare metal"? It would be so convenient to just go from full nuke-and-pave to a functional dev machine with a single config repo.


Terraform alone just does infra provisioning, but it can call a script for application setup.

PXE boot?

Roam Research https://roamresearch.com/

A tool for networked thought that has been an effective "Second Brain" for me.

I'm writing way more than ever through daily notes, and the bi-directional linking of notes enables me to build smarter connections between notes and structure my thoughts in a way that helps me take more action and build stronger ideas over time.


I’d recommend you to check out Obsidian [1] from the makers of Dynalist. It’s also a tool made mainly for Zettelkasten, but it is offline and local by default. It’s not an outliner like Roam, but rather a free-form text editor.

I feel that Obsidian’s values align more closely with the values of a general HN reader. For example, the files (Zettels?) are plain markdown files, so the portability is much higher than what is the case with Roam (which is online only, and your data is somewhere in a database in a proprietary format).

Another example would be the support for plugins, which are first-class citizens (although the API is as yet undocumented); many of the core features are implemented as plugins and can be turned off.

And there's a Discord channel where you can talk with the devs, who are very responsive; so much so that I'm surprised they can roll out new features so quickly (at least one feature update per week, from my limited experience with Obsidian).

(Not affiliated in any way, just a happy user)

[1]: https://obsidian.md/


I've had good experiences with personal wikis before, but have fallen back to plain notes. I think note-taking by itself is immensely powerful and underappreciated in general (I wish I had started earlier), and all that's necessary is building a habit out of it. Maybe this can give it a little extra spice (hopefully without being as cumbersome as a full-blown personal website).

I can recommend this video [1] from the author of How to Take Smart Notes. The whole Zettelkasten is a great idea, and he explains it succinctly in that talk. He also compares the status quo methods of note-taking with the Zettelkasten, which for me was very eye-opening.

[1]: https://vimeo.com/275530205


Notational Velocity [0] seems to be something very similar, if not the exact same, except it's a macOS app and not a web app.

[0] http://notational.net


Thanks! Longtime user of nvAlt and I never noticed that.

FYI: the nvAlt developer is working on a new version: https://brettterpstra.com/2019/04/10/codename-nvultra/

How's this different from hypertext? (I genuinely don't know)

Not sure about the specific features of hypertext, but in general: bi-directional linking, block references (Roam is an outliner like Dynalist or Workflowy), block transclusion, graph view of your page network...

Of course, you could throw a bunch of scripts together to approximate these features - but you don't have to, since Roam (and Obsidian and others) already exist.
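For anyone wondering what the bi-directional part buys you in practice: you only write links in one direction, and the backlinks can be derived. Here's a minimal TypeScript sketch, assuming a folder of plain markdown notes with [[wikilink]] syntax; this is an illustration of the concept, not how Roam or Obsidian are actually implemented:

  // Derive backlinks from notes that use [[wikilink]] syntax.
  import { readdirSync, readFileSync } from "fs";
  import { basename, join } from "path";

  function backlinks(dir: string): Map<string, string[]> {
    const incoming = new Map<string, string[]>(); // note -> notes that link to it
    for (const file of readdirSync(dir).filter((f) => f.endsWith(".md"))) {
      const source = basename(file, ".md");
      const text = readFileSync(join(dir, file), "utf8");
      for (const match of text.matchAll(/\[\[([^\]]+)\]\]/g)) {
        const target = match[1];
        const list = incoming.get(target) ?? [];
        list.push(source);
        incoming.set(target, list);
      }
    }
    return incoming;
  }

  // backlinks("./notes").get("Zettelkasten") -> every note referencing it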


Good shout - that's been on my watch list for a while now. Thanks for the reminder!

The hype on Twitter can get a bit annoying - but Roam is seriously awesome.

This is just TiddlyWiki, no?

Considering that TiddlyWiki has around 4 plugins that are supposed to make it more like Roam, I'd say that Roam probably isn't just like TiddlyWiki.

Now, I’m not a TW user, but I think things like block references, outliner features, and bi-directional linking aren’t there by default.


Svelte is my go-to for personal projects. My speculative hunch is that it will begin to rival React within the next 5 years thanks to its simplicity and the cost reductions that follow.

There are a lot of advantages, but this 5min video comparing React hooks to Svelte should be enough to trigger interest: https://www.youtube.com/watch?v=YtD2mWDQnxM


I’m really curious how my 100kloc enterprise app would work in Svelte, but my hunch is it just wouldn’t be possible to build.

Svelte always seems really cool in these toy examples, but I want to see a significant app built with it instead.


Yeah, that's a common myth. Svelte isn't some fringe alternative JS framework. It supports the same functionality and structure as any React/Vue app, but with far less code, and it runs far faster since it's a compiler.
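To make the compiler point concrete, here's roughly the kind of imperative DOM code a Svelte-like compiler can emit for a counter, instead of shipping a runtime that diffs a virtual DOM. This is illustrative TypeScript, not Svelte's actual output:

  // Hand-written stand-in for compiler output: a button whose label
  // is bound to `count`, updated with a targeted DOM write.
  let count = 0;
  const button = document.createElement("button");
  const label = document.createTextNode(String(count));
  button.appendChild(label);
  document.body.appendChild(button);

  button.addEventListener("click", () => {
    count += 1;
    label.data = String(count); // surgical update, no vdom diff at runtime
  });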

You're not going to find many brand-name companies using it because the PM decision-makers at large enterprises are always going to be many years behind and going with the "safe" JS framework leader at the time.

Well-known companies currently using Svelte: Apple [1], New York Times [2], Spotify [3] and GoDaddy [4]

1: https://twitter.com/mansur_ashraf/status/1204542852581273600

2: Svelte creator Rich Harris works for them

3: https://www.reddit.com/r/sveltejs/comments/f18n33/companies_...

4: https://svelte.dev


"as any React/Vue app" From my experience, and what I heard from others, a lot of devs prefer React over Vue even though Vue syntax is cleaner and allows for shorter code. React just feels more robust when the app grows large enough compared to Vue. Note that we're not talking only about the library itself and its syntax, but also the ecosystem and support around it.

That being said, I think Svelte -> Vue comparison might be even more imbalanced than Vue -> React.


Solar energy, carbon dioxide and water directly to butanol. In other words, store solar energy directly as fuel. There are other versions that generate hydrogen, but that has a much lower energy density than liquid fuel.

Just modify your existing ICE to run on butanol and you're good to go. <a bit of hand waving there.>

See https://www.intelligentliving.co/biofuel-solar-energy-co2-wa... for where we were a year ago.


Caddy, specifically v2 (https://caddyserver.com/v2)

I've been using Caddy v2 all through beta/RC and glad it's finally stable with a proper release. I moved away from using nginx for serving static content for my side projects and prototypes. I'm also starting to replace reverse proxying from HAProxy as well. The lightweight config and the automated TLS with Let's Encrypt makes everything a breeze. Definitely has become part of my day-to-day.


Robotics + Deep Learning - I think we just quietly passed a milestone where robots using deep learning can perform useful real-world tasks (selecting items, packing boxes, etc.)

If true, we could be at a watershed for robotics adoption and progress, as large-scale deployments generate the data to train on more complex tasks, leading to more deployments and so snowballing onwards.

This seems like a much more likely process to lead to a type of "general AI" than researchers pushing us all the way there.

Covariant AI (and their partnerships) is what got me thinking: https://covariant.ai/


Ubuntu, ParrotOS, Kali

Julia Lang is fun

For devops, Pulumi/CDK

I watch graph dbs but they all suck or are too expensive or have insane licenses (Neo4j, RedisGraph)

Differentiable programming, Zygote, Jax, PyTorch, TF/Keras

Optimal transport (sliced fused Gromov-Wasserstein)

AGI: Levin, Solomonoff, Hutter, Schmidhuber, Friston

Ramda is amazing

George Church’s publications

I'm also super interested in molecular programming.

DEAP is fun for tree GP a la Koza

Judea Pearl’s work will change the world once folks grok it

Secure multiparty computation


I looked into Pulumi last week, and it seems cool, but I think they need to rework their library design to avoid fracturing their ecosystem across languages (or just standardize on one language).
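For context, this is roughly what the TypeScript flavor looks like; Pulumi ships parallel SDKs for Python, Go, and others, which is the fragmentation in question. Resource names and arguments here are illustrative:

  // Sketch of Pulumi in TypeScript: infrastructure as ordinary code.
  import * as aws from "@pulumi/aws";

  // A private S3 bucket, tagged for a dev environment.
  const bucket = new aws.s3.Bucket("app-assets", {
    acl: "private",
    tags: { environment: "dev" },
  });

  // An object uploaded into the bucket at deploy time.
  new aws.s3.BucketObject("config", {
    bucket: bucket.id,
    content: JSON.stringify({ featureFlag: true }),
  });

  // Stack output, printed after `pulumi up`.
  export const bucketName = bucket.id;

The upside of per-language SDKs is that each feels idiomatic; the downside is exactly what the parent describes: every binding has to be designed and maintained in parallel.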

As an optimal transport lover working on differentiable programming, I approve this message. :)

I am particularly interested in food products that will replace animal-based foods. There will be a major consumer shift in the upcoming decades as consumers move to more sustainable alternatives. This will change industries, towns, and regions.

Self hosting - https://cloudron.io

A while back I installed ServerPilot, which automatically sets up Nginx/Apache/PHP/MySQL for you. It also handles security updates. This made those $5 VPSes so much more appealing [1], as I could install lots of small Node.js apps on a single server and avoid managed hosting providers, who seem to prefer charging per app instance.

Anyway, ServerPilot then scrapped their free plan, so I've been looking for an alternative. Cloudron looks cool; I don't see anything specific to Node.js/Express, but it does have a LAMP stack which includes Apache, so I might try that. Otherwise I'll probably use something like CapRover [2], a self-hosted platform-as-a-service.

[1] https://twitter.com/umaar/status/1256155563748139009

[2] https://caprover.com/


Dokku is an excellent option for this sort of thing, and manages subdomains for you.

http://dokku.viewdocs.io/dokku/


And SSL is a cinch! I have been very happy with Dokku; I'm surprised I don't see it mentioned around here more often.

Would love to get your opinion, as I'm building a product competing with ServerPilot in this space. Is $5 too expensive for the service? Or is it just too expensive because the billing increases as you have more servers under management, and they charge you per app as well?

Are there features ServerPilot is missing that would justify the price more for you? Some examples might be monitoring, analytics, automated security patching, containerization of workloads, etc.

Would the plan be more appealing if the cost of the plan, the portal, and the VM hosting itself were all rolled into one? (i.e. you would just pay one company, rather than having to sign up for DO as well as ServerPilot).


1) Independence of hosting provider is a must. Don't want to be forced to use your VPS service when I have all my infrastructure already on Linode, DO, Vultr, etc.

2) Should be free when used in non-commercial applications. Multiple servers included.

3) Keep the common and already available typical configurations free: LAMP, LEMP, Python, Let's Encrypt, email. Charge for things which no other panel, free or otherwise, typically supports: LiteSpeed, Go, Caddy, load balancing, SQL replication, GraphQL, etc. That's value.


"Self-hosting apps is time consuming and error-prone. Keeping your system up-to-date and secure is a full-time job. Cloudron lets you focus on using the apps and not worry about system administration."

Neat, I don't think I've seen something like this before!


It kind of just looks like a simplified version of cPanel, which has been on every VPS for the last 20+ years.

"simplified version of CPanel" is something neat that I haven't seen before

in addition, sometimes people don't know things that you know, and you would do well to keep that in mind: https://xkcd.com/1053/


They'd be so much more successful if the "Install" button did not lead to this:

  wget https://cloudron.io/cloudron-setup
  chmod +x ./cloudron-setup
  ./cloudron-setup --provider [digitalocean,ec2,generic,ovh,...]

There's a Cloudron 1-click image on DigitalOcean.

Which, as a regular user, I don't understand when I see it.

Hell, I'm a dev, and I still didn't know that would let me create one quickly.


Agreed. Do you have any suggestions to improve the initial onboarding?

Ahoy gramakri.

I've been a Cloudron user for a while now. I recently launched a company, and we're now a paying and very happy customer of Cloudron's business subscription.

It seems that the "next app suggestion" process has stalled. As an outsider to your internal process, I cannot see which applications are being preferred over others. There are tons of very good suggestions in the app suggestion forum that don't seem to be receiving traction.

A few examples which Cloudron needs, and which would help it attract more users:

- A WireGuard VPN frontend application

- Jupyter Notebooks Environment

- Odoo ERP Community edition

- Erpnext


Ideally:

- user picks a cloud (or has an "Advanced" option on the next step instead)

- you show them OpenID/OAuth form for their cloud provider

- guide them through the creation of an account if necessary

- you get a token that permits your server to create cloud resources on behalf of the user

- you go ahead and create their services for them

- potentially store the token to be able to update the apps automatically
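As a rough sketch of the last two steps, assuming DigitalOcean as the provider: once the OAuth flow yields an access token, the backend can create resources directly against the provider's API. Everything here (names, region, size) is illustrative, not how Cloudron actually works:

  // Hypothetical sketch: create a droplet with a user-granted OAuth token.
  import fetch from "node-fetch";

  async function createServer(token: string): Promise<number> {
    const res = await fetch("https://api.digitalocean.com/v2/droplets", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        name: "cloudron-host", // illustrative name
        region: "nyc3",
        size: "s-2vcpu-4gb",
        image: "ubuntu-20-04-x64",
      }),
    });
    if (!res.ok) throw new Error(`droplet creation failed: ${res.status}`);
    const data = (await res.json()) as { droplet: { id: number } };
    return data.droplet.id; // poll this until the droplet is active
  }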

I thought about this when I was considering building a similar service (also similar to sandstorm.io). Glad to see somebody doing something in that area (I guess without the permissions model yet).

Problem is: most clouds don't let you easily create an account, so "guide them through the creation of an account" might be impossible without leaving the browser.


Is that any less secure than “sudo dpkg -i foo.deb”?

It is certainly less secure than just calling the API of those cloud providers directly from the site backend.

The subscription price is crazy now. And they don't even do hosting.

> And they don't even do hosting.

I'm pretty sure that's their whole point of existence.


Combining statistics-based AI with GOFAI to create systems that can both recognize objects in a sea of data and reason about what they saw.

The MiSTer FPGA-based hardware platform.

RISC-V is gonna change everything. Yeah, RISC-V is good.


How do you combine statistics-based AI with GOFAI?

GOFAI basically consists of inference and reasoning techniques, some of which cease to work well when you scale them up too much (computational complexity) or when there is uncertainty involved. There have been some efforts to push reasoning to larger scales (description logics) as well as to problems with uncertainty (ILP, Markov Logic), but they've been de-emphasized or forgotten in recent times because you get a lot of mileage out of end-to-end deep learning - where essentially hidden state within the network deals with the uncertainty on its own, and where the additional compute overhead + rule-engineering effort doesn't seem warranted.
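A toy sketch of the hybrid idea in TypeScript: a (stand-in) statistical perception stage emits uncertain facts, and a GOFAI-style forward-chaining rule engine reasons over them. The confidences, threshold, and rules are all invented for illustration:

  type Fact = string;

  interface Rule {
    if: Fact[];
    then: Fact;
  }

  // Pretend output of a vision model: label -> confidence.
  const perception: Record<string, number> = { cat: 0.92, sofa: 0.88, dog: 0.1 };

  // Keep only the facts the statistical stage is reasonably sure about.
  const facts = new Set<Fact>(
    Object.entries(perception)
      .filter(([, p]) => p > 0.5)
      .map(([label]) => `sees(${label})`)
  );

  const rules: Rule[] = [
    { if: ["sees(cat)", "sees(sofa)"], then: "catIsIndoors" },
    { if: ["catIsIndoors"], then: "checkFoodBowl" },
  ];

  // Forward chaining: apply rules until no new facts appear.
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      if (rule.if.every((f) => facts.has(f)) && !facts.has(rule.then)) {
        facts.add(rule.then);
        changed = true;
      }
    }
  }
  // facts now also contains "catIsIndoors" and "checkFoodBowl".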

Wrote a blog post on the technologies I believe are going to change industries:

1. No-Code Tools
2. GraphQL
3. Oculus Quest
4. FPGA
5. Netflix for Gamers
6. Windows for Linux
7. Notion
8. Animation Engines

https://jay.kanakiya.in/blog/what-i-am-excited-about-in-2020...


FPGA is still too cumbersome to make it big. It's too expensive for general appliances, talent is hard to find, and the development process is still stuck where software was 20 years ago. FPGA vendors are still trying to roll out their own everything-included, non-standard solutions. Those solutions don't scale well. I've seen engineers struggling to trace where some signal ends up; it's complete insanity.

I find GPUs conceptually similar to FPGAs for most soft applications (video processing and similar number crunching). They also provide a huge number of re-purposable blocks for programmable parallel computing. GPUs have won out because they became mainstream through gaming and more readily opened up to general software practices and methodologies. It's no surprise the machine learning community is avoiding FPGAs for the most part.


Agreed, FPGA is currently too expensive, with a non-standard toolset across the industry. But if someone is able to create an industry coalition (think Wi-Fi Alliance or Bluetooth SIG), it could definitely make a large impact, with several companies reaping the benefits.

Great post, thanks.

Darklang: https://darklang.com/

I've tried out an early version of their product, and I really like where they're headed.


I'd love to try it if they didn't tie the language to their hosting service. I understand the necessity of the coupling, but until someone can start a competing hosting company with the same language, it's not something I'm interested in.

Huh, Dark looks pretty similar to what I'm doing, albeit significantly more work since they went and developed a whole language and editor. If you're not averse to Clojure, give this a look: https://github.com/acgollapalli/dataworks

Totally reasonable. I feel the same way, but there are a lot of people who just want something up and running and are happy to accept vendor lock-in risks. I'm sure they'll get there eventually.

I agree here. It's really exciting to have a platform where the friction of releases is totally removed. I'm excited to see where they end up with this product.

Solid state batteries.

The tech is tantalizingly close, although not perfected yet. If and when they become available, these batteries will have a far higher energy density and degrade at a far lower rate than existing batteries.


I agree. Is there any chance this is what Tesla’s battery day could include?

Cloudflare Workers. It was on my watch list at the beginning of the year, and I'm just about to put a 20k-page "static" (but with tons of interactivity) site into production on them.

Using them as an API gateway, with the KV store for truly static assets, is amazing.
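For a flavor of the pattern, here's a hedged sketch of the serving side. STATIC_ASSETS is a hypothetical KV namespace binding (the real name would be configured in wrangler.toml), and the cache header is just an example; with @cloudflare/workers-types the binding would be typed for you:

  // Worker that serves "static" assets out of Workers KV.
  declare const STATIC_ASSETS: {
    get(key: string, type: "arrayBuffer"): Promise<ArrayBuffer | null>;
  };

  addEventListener("fetch", (event: FetchEvent) => {
    event.respondWith(handle(event.request));
  });

  async function handle(request: Request): Promise<Response> {
    const path = new URL(request.url).pathname;
    const key = path === "/" ? "/index.html" : path; // map root to the index page
    const body = await STATIC_ASSETS.get(key, "arrayBuffer");
    if (body === null) {
      return new Response("Not found", { status: 404 });
    }
    return new Response(body, {
      headers: { "cache-control": "public, max-age=3600" },
    });
  }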


The general trend toward returning computing to the edge, which is just starting and has been accelerated by COVID forcing BeyondCorp-type practices on us.

Cognitive radio, ultra-wide spread spectrum, and other radio tech that promises great range and license-free or minimal-licensing operation due to lack of interference with other stuff.

Rust is the first serious C/C++ replacement contender IMHO.

RISC-V and other commodity open source silicon.

Cheaper faster ASIC production making custom silicon with 100X+ performance gains feasible for more problems.

Zero knowledge proofs, homomorphic crypto, etc.


> The general trend toward returning computing to the edge

By "the edge", do you mean users' devices, or just more local data centers a la Cloudflare?


I mean users' devices and to a lesser extent things like federated infrastructure that's "closer to the user" socially speaking.

It's a trend in the earliest stages, sort of like cloud was in the early 2000s.


Gaze tracking. I've used the dedicated gaze tracking sensors from Tobii and it's really natural and responsive. I think we're going to see a lot of touchless interaction become popular in the post-covid world.

I agree. While there are natural limits to how precise the eye is for interactions (eyes naturally flicker back and forth), I definitely feel like there's potential here. I did a university project on combining voice and gaze tracking for programming; while gaze is good for e.g. selecting a region of the screen, it's hard to click small things with it.

How accurate are those sensors? I've often thought how nice it would be to get rid of the mouse and use sensors to figure out where exactly on my screen I'm looking.

Mouse-level accuracy requires a well-calibrated setup, a correctly sized monitor, and consistent sitting posture. If you want to do something like a Square POS checkout and only need to distinguish which of 4 buttons a random visitor is looking at, it would be pretty forgiving.

Thank you; so it's not yet an option for me. I doubt my posture and sitting position are regular enough for calibration to work.

It might, at some point, be a welcome addition to a touchpad though. If you touch the pad while your pointer is far from where you're looking, it could jump to that area and let you do the fine tuning with your fingers.
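That interaction is easy to sketch. A hypothetical TypeScript illustration of the idea; the threshold is an assumed tuning value, not any vendor's API:

  // Gaze-assisted pointing: on touchpad contact, warp the cursor to the
  // gaze point if it's far away, then let the fingers do the fine tuning.
  interface Point {
    x: number;
    y: number;
  }

  const WARP_THRESHOLD_PX = 200; // assumed; tune per monitor and tracker

  function onTouchpadContact(cursor: Point, gaze: Point): Point {
    const distance = Math.hypot(gaze.x - cursor.x, gaze.y - cursor.y);
    // Far away: jump to where the user is looking.
    // Nearby: leave the cursor alone so small corrections stay precise.
    return distance > WARP_THRESHOLD_PX ? { ...gaze } : cursor;
  }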


I already do that without the need of a touchpad: just use your numpad keys as a mouse. If you're on Windows, look into mouse.ahk. Tobii will snap the mouse to where you're looking once it senses the mouse is moving. Works great when you want to stay on the home row; selecting text with it, though, is not as good.

zkSNARKs: https://blog.ethereum.org/2016/12/05/zksnarks-in-a-nutshell/

Essentially, they let you verify a computation is accurate without doing the computation yourself, even treating the computation as a black box so you don't know what is computed. Many applications in privacy, but also for outsourced computation.


One important weakness of zkSNARKs is that they require a trusted setup, for example [1]. A newer alternative, zk-STARKs [2], doesn't require the trusted setup and is post-quantum secure. However, it significantly increases the size of the proof (around 50KB). In general, hash-based post-quantum algorithms require bigger sizes, and it will be interesting to watch the progress made in this regard.

[1] https://filecoin.io/blog/participate-in-our-trusted-setup-ce...

[2] https://eprint.iacr.org/2018/046.pdf


If I'm not mistaken, I believe they found a way to do a trustless setup a few months ago. Unfortunately I don't have any more info on hand, but I remember reading about it in passing in regard to research performed by Ethereum developers.

I'm not up on the math, but https://electriccoin.co/blog/halo-recursive-proof-compositio... sounded like that sort of thing.


Looks like the ScyllaDB playbook, i.e. rewrite a popular Java app in C++ and sell it as a much faster product.

Going to be interesting to see if they survive as the pace of JVM improvements has been rapidly increasing in the last year or so.


Thanks, though what we sell is operational simplicity. Speed is nice, but not the main benefit; a single binary that's easy to run is what CIOs seem to be interested in. Though we are young. Fingers crossed it works :)

I agree that operational simplicity will sell this to more organizations than performance will. There just aren't that many companies in the world that are bumping up against the scaling limits of Kafka.

When I look at the landing page of vectorized.io it touts the speed repeatedly without mentioning this simplicity pitch you find deeper in the site:

> Redpanda is a single binary you can scp to your servers and be done. We got rid of Zookeeper®, the JVM and garbage collection for a streamlined, predictable system.

That does sound great! Put that information right up front.


Thank you! Will do! <3

Tailscale - riding on the wireguard wave - https://tailscale.com/

Also Wireguard - https://www.wireguard.com/


> Tailscale

I'd recommend this alternative, which doesn't require a 3rd party - that being one of the reasons to choose WireGuard over a traditional VPN:

https://github.com/subspacecommunity/subspace


I'm sure I wouldn't use a service that can literally get into each and every device on my private network if they want to, or worse, if they get hacked. Each and every device in the network automatically accepts whatever public keys and endpoints get advertised by their servers and automatically connects to them. It's not only an overpriced, mediocre product; from a security perspective, it's the most dangerous SaaS service I've ever seen.

My biggest fear is that once this company gets tied to WireGuard and the security disasters come out, WireGuard's fate will be tied to a mediocre commercial product that put money above engineering decisions.


https://immersedvr.com/

Virtual monitors in an Oculus Quest that actually work. What's coming up that will be amazing is hand controls (including a virtual keyboard) plus conferencing and collaboration tools.


I'm going to try this out. I assumed the resolution of the Quest wasn't quite there to make coding in a virtual desktop comfortable. How has your experience been?

I use it every day. With Wi-Fi Direct there is zero lag.

I work with three 1440x900 virtual screens. It's more than enough for coding, and the convenience of multiple screens for free offsets the low resolution.


Did you try increasing the internal texture resolution? It makes text crisper. The framerate will drop, but it's tolerable for this use case; I find it very useful. Various resolutions are supported; this is the highest, where the increase in quality should be most noticeable:

  adb shell setprop debug.oculus.textureWidth 2048 && adb shell setprop debug.oculus.textureHeight 2048

You have to start the application after this is executed. To go back to the original, you can reboot the device or run this:

  adb shell setprop debug.oculus.textureWidth 1280 && adb shell setprop debug.oculus.textureHeight 720


Couldn't figure out how to add virtual monitors.

Are you on Mac? Virtual monitors are only available on Mac for now but are rolling out for Windows and Linux soon. You can also use headless monitor plugs in the meantime.
