Ask HN: What “missing” technical solution(s) do you wish existed?
71 points by webmaven on Dec 2, 2016 | 212 comments
This could be just about anything, from "project A but written in language B", "an API or library for real world activity C", "a unified solution for both D and E", all the way to "An open source version of proprietary solution Z", etc.

Go wild, dream big. You never know, someone might respond with a link to such a project that already exists... or decide to start one!

A peer to peer encrypted datastore that everyone can use to store their own metadata.

Facebook knows what movies I like and what interests I have, Spotify knows what songs I listen to, Amazon knows what books I like to read. All of these pieces of data are currently siloed and controlled by private corporations, many of which can do whatever they like with that data.

I want a private encrypted datastore that does this instead. Peer to peer, so nobody controls it or can kill it. And controlled by you, so governments can't get access. You set up a node and contribute CPU power and disk space. Then you get a private address and key and can push small pieces of metadata to it. This data is mirrored and highly redundant, so it won't disappear if some nodes go offline.

Then any site can integrate with this store. People will build libraries to write to it with various languages and others will build websites which make it easy for the layman to store and get data from it. People could create online "metadata banks" so that less technical people don't have to run their own node. Integrating with this datastore will eventually be as easy as integrating with current oauth services.
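The push/read flow described above might be sketched roughly like this. Everything here is an assumption: the function names are invented, a dict stands in for the replicated peer network, and a real system would use proper asymmetric crypto and client-side encryption rather than a bare hash and HMAC.

```python
import hashlib
import hmac
import json

# A dict standing in for the replicated peer-to-peer network.
FAKE_NETWORK = {}

def derive_address(private_key: bytes) -> str:
    """Derive a public address from a private key (here, just a hash)."""
    return hashlib.sha256(private_key).hexdigest()

def push_metadata(private_key: bytes, record: dict) -> str:
    """Sign and store a small piece of metadata under our address."""
    address = derive_address(private_key)
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(private_key, payload, hashlib.sha256).hexdigest()
    FAKE_NETWORK.setdefault(address, []).append(
        {"payload": payload.decode(), "sig": signature}
    )
    return address

def read_metadata(address: str) -> list:
    """Anyone holding the address can fetch the stored records."""
    return [json.loads(e["payload"]) for e in FAKE_NETWORK.get(address, [])]
```

A consumer site would then only need the address (plus whatever decryption rights you grant it), not a per-service integration.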

Having one global system not controlled by anyone also means there's now a standardized way for any service to request access to your data. They don't have to specifically integrate with Facebook, Spotify, Bandcamp, Last.fm etc, they just have one integration point and you can choose what data you want to share with them.

And in theory it will live on forever so as future services grow and die your data doesn't die with it.

I won't pretend to exactly understand Urbit, but it seems to be more or less what you're describing:

> In an Urbit world, your data is no longer trapped in a jumble of proprietary servers. Your urbit is a permanent, versioned, typed archive the size of your digital life. Even before you move your data from a Web service to a local Urbit app, your urbit can drive your account with an API or scraper.


That seems like a nice use case for ipfs: https://ipfs.io/

As of now there is no automatic mirroring, because nodes only store the data they request, but in the future one will use filecoin (http://filecoin.io/) to incentivize mirroring.

Interesting. Major problem: either existing services need to shift gears to support this third party that reduces their control over data (when often, your data is how they make their money), or else new equivalent services using said third party need to be made and to meaningfully dislodge them (difficult; inertia is a thing).

These services already have APIs that give up the data, so anyone could build data extractors that let you regularly pull your metadata from them and store it in this new datastore.

Only the consumers of the data need to change their behavior. If there are easy-to-use libraries for accessing the datastore, and enough demand for it, then more and more consumers will adopt it over time.
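The extractor side (pull from a service's API, normalize for the shared datastore) might be sketched like so. The client here is a stub standing in for a real API client, and every name is hypothetical:

```python
# Hypothetical extractor: pull listening history from a service's API
# and normalize it into records for the shared datastore.
def fetch_recent_tracks(api_client):
    """Ask the service for recent listens; path is illustrative only."""
    return api_client.get("/me/recent-tracks")

def to_records(service: str, tracks: list) -> list:
    """Normalize raw API results into service-agnostic metadata records."""
    return [{"service": service, "kind": "listen", "track": t} for t in tracks]

class StubClient:
    """Stands in for a real HTTP client against e.g. a music service."""
    def get(self, path):
        return ["track-1", "track-2"]
```

Run on a schedule, a set of these extractors would keep your datastore current without the services themselves changing anything.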

Sounds like something you could build on top of Retroshare:


If you can export the metadata you want from Facebook et al., then you should be able to find a way to share it in an encrypted peer-to-peer network.

This is an idea that'll start gaining traction over the next few years. I was thinking of building something like this out in Go, backed by storj.io, just the other day. We're also starting to see others like the aforementioned Keybase filesystem, ipfs, maidsafe, etc.

I like this but how do you make sure that Facebook gives all the data that they collect directly from the user back into the system?

Perhaps an audit system which compares data shown to the user on Facebook to what the system knows Facebook contributed?

...and phone number and physical address and all of the other "real-life" metadata that is commonly defined but disparately stored/leveraged/shared.

Sounds like this?


Great idea. I was just thinking about this the other day and I too would be interested in something like this.

I want to slot my phone into a dock connected to three high resolution monitors and a keyboard/mouse and be able to do development work.

Essentially, when it's in my hand it's in Phone Mode, when it's in the dock it's in Desktop PC mode.

We are getting there with tablets. You can do this with the Surface, for example, thanks to DisplayLink technology. I just want the device to be phone-sized, around 5.5" or so, with good battery life when operating as a phone, a decent processor, a decent amount of RAM (8GB at least), and about 500GB of SSD storage.

With a Surface or equivalent tablet, you still need to carry around a bag for it, which is a pain. If it could be small enough to fit in your pocket, that would be ideal!

Does the development environment have to be housed entirely on the phone itself? I ask because phones are too underpowered for most dev work. But, cloud technology is getting to the point where you can play AAA video games remotely with low latency using a modest computer as a client.

To extend on what you have, I think it would be really cool to have a phone app that brings up a remotely-hosted triple monitor dev environment when it's docked. The actual dev server could be anywhere and all you would need to access it, both at home and at work, is the phone. Kind of like a visual tmux.

edited: removed "which is why the many attempts at making desktop docks for phones have failed". There are many reasons they have failed, but not catering to devs probably isn't one of them. :)

Cloud is problematic because of cost of bandwidth and storage compared to local solutions. Until bandwidth is actually plentiful, uncapped, and cheap, I would rather have a local solution.

We're talking about a setup that's already plugged into the wall (triple monitors, keyboard, mouse). Just have the phone use Wifi.

I'll admit, having a self contained setup would be nicer, but that's not going to happen until batteries get much denser or phones get a lot more efficient.

I'm not sure why what I plug it into cannot have its own computing and storage power. There are wired caps on data too.

In fact, why the heck did Apple sell a display and an iMac? Why not insert a processor board (a blade) into the display to turn it into an iMac? Why was the Mac Pro anything more than a bunch of slots for blades?

I want to carry a small display, some storage, and computing power and be able to dock it with more storage, display, and computing power. I really think we need to examine the PC in the post-PC world. Given the prices, I'm fine with the cloud being my backup, not work environment.

Hmm. That's an interesting idea. How would the phone and the PC work together, though? I could see it working if they were all the same architecture. Then you're just adding more RAM and cores.

Actually, the RAM could be an issue. For example, say you're running a video editing program that's using 15 GB of RAM and you want to undock. You'd need some way of storing the state of the program and the RAM it's using when it's not docked. Either that or you'd be forced to shut it down completely.

They wouldn't need to be the same architecture, but they would need a common means of communication for coordinating tasks.

There's some precedent in the design of the Transputer plus daughterboards; borrowing from the latter could handle the heterogeneous nature of phone/PC pairings.

Making it hot-swappable would, of course, be a challenge. Using physical mechanisms to hold the device means that boards/phones/whatevers could be ejected safely only once various conditions (state saving, rerouting of data, etc.) were met.

Storage was a solved issue with the Newton Soups in the '90s, but other systems won. Memory could work the same way: run the programs that require the high RAM on the machine I dock to (using local dock storage replicating what I need from my phone), and when I unplug, save the state. Perhaps back it up to the network in that state in case I plug into a different dock next time.

I was so excited when Microsoft demoed their Display Dock[1] for the Lumia 950s. It was the first step towards what you describe, a phone that becomes a desktop case when connected to the necessary peripherals.

I am also saddened that it seems like Microsoft is leaving mobile so it's unlikely we'll see a next iteration of this.


[1] https://www.microsoft.com/en/mobile/accessory/hd-500/

Microsoft might be leaving the mobile hardware business, but they're not the only ones making Windows Phone hardware. The HP Elite x3 just came out with this same feature. You can use the "Connect" app to pull up a remote desktop from any Win10 PC, put it into a special dock that plugs into a normal keyboard, mouse, and monitor, or connect to a wireless monitor over Miracast.

Have you seen Andromium as an early take on this concept?


That's very cool!

Windows phones already do this, minus development tools. It's cool because, e.g., Word Mobile has the desktop and phone UI in the same package and presents itself appropriately.

I would be surprised if, with more competitive Intel chips or x86-on-ARM emulation (the latter is in the works, afaik), this didn't happen sooner or later.

Microsoft showcased something similar - https://www.microsoft.com/en-us/mobile/accessory/hd-500/ using Windows Phones

I don't want VC funding, I want Microsoft to go in this direction. Their phone OS was great, I generally like their tablets, and most desktop software works already on Windows. If anyone from Microsoft is listening I want them to do this.

Yep, I am genuinely excited to see where Microsoft goes with this.

Do you mean like Windows Continuum? It is only ARM, but I've heard rumors of an x86 Surface Phone sometime next year.


Reminds me of the Switch. I mean, that's gaming instead of dev, but it's a pretty similar concept.

I wonder if Nintendo will sell docks separately from the system for that purpose? I would hope so.

And let us install whatever OS we want on that hw.

Motorola had this thing about five years ago: http://allthingsd.com/20110216/motorola-atrix-android-phone-...

It sounds like it didn't perform that great, but with more modern phone hardware and possibly some hardware in the dock to help with coprocessing (graphics card, more RAM, etc.), you could build something quite useful. With the advent of USB Type-C, docking stations for phones could definitely work. Gigabyte has even shown off a Thunderbolt/Type-C external graphics card. I can definitely see souped-up docking stations coming out within the next year or so. I could also see laptop versions of this coming out soon, too, for users who want a more portable version.

So, you want Ubuntu convergence?

I had the same urge to carry a small, capable device to make phone calls and do development work. I don't need much RAM or storage, as long as it's enough to rotate to one project sandbox. The closest I found so far is: http://www.solu.co

What you want is nearly possible: Intel's current generation of Compute Sticks (CS325 and CS525) are the size of a USB stick but have 2 cores, 4GB of RAM, and 64 GB of disk space. Granted, you can't use it on the go but it is ultra portable.

I don't understand. If you already are planning on using it with 3 monitors, what is the advantage of making it so portable? Why not just leave a slightly larger device where the monitors are and use the small device as storage / boot disk?

I guess.

I just like the idea that, if I am contracting in a couple of different places, have one of these setups in each one, rock up with my phone and boom!

I can see the appeal, but it still seems way easier just to put a cheap computer in each location with compute power equivalent to a phone's and carry around the SSD.

Ubuntu phone

1. A language that synthesizes the advantages of Bash, Node, and ML... basically an expressive yet simple type system, nice syntax, an interactive environment, fast compilation, the ability to run as a script in Unix, first-class support for pipelines, files, and sockets, and built-in support for HTTP, JSON, regex, etc.

2. An event sourcing database that's not a gargantuan Java EE thing, something like the Redis or SQLite of event sourcing. I just want to store a bunch of events and build indexes in the form of update functions. And then easily access the cached indexes and live event streams through HTTP.

3. Something like Emacs but built on a React-like screen update model and engineered to be fast on something like a Raspberry Pi. We have a common room computer in our collective house that we use as a chat, todo, clock, etc, and right now it runs Emacs in xterm which is pretty good but the programming model is somewhat antiquated. (I tried making a React app but Chromium on the Raspberry Pi 2 is laggy, and besides Emacs is much nicer as an interactive environment.)

4. A tiny laptop with no trackpad and an e-ink display.

5. A scriptable bank account and a common metadata format for bills and invoices.

6. A low cost global roaming SIM card and a flourishing web of minimal bandwidth text services.

7. An accurate ebook parser generating semantic markup and a nice variety of innovative reading interfaces, for example showing one paragraph at a time, some kind of interactive mind mapping reader, etc.

8. A collaborative text editor that works more like a chat than a word processor.
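Item 2 above (append events, build indexes as update functions) can be sketched with SQLite as the append-only log. The function names are my invention, and a real "SQLite of event sourcing" would also persist index state and expose live streams over HTTP:

```python
import json
import sqlite3

# In-memory SQLite as a minimal append-only event log.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (seq INTEGER PRIMARY KEY, body TEXT)")

def append_event(event: dict) -> None:
    """Events are only ever appended, never updated or deleted."""
    db.execute("INSERT INTO events (body) VALUES (?)", (json.dumps(event),))

def build_index(update, initial):
    """Fold an update function over the full event stream to build an index."""
    state = initial
    for (body,) in db.execute("SELECT body FROM events ORDER BY seq"):
        state = update(state, json.loads(body))
    return state

# Example index: count of events per type.
def count_by_type(state, event):
    state[event["type"]] = state.get(event["type"], 0) + 1
    return state
```

The appeal of the model is that new indexes can be defined at any time and rebuilt from the same log.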

> a common metadata format for bills and invoices

Such a format does exist: https://en.wikipedia.org/wiki/EDIFACT

It's just that hardly anyone uses it. It's got some adoption with large companies, such as auto manufacturers that mandate that their suppliers use it, but it's far from widespread (and certainly not with SMBs or consumer-oriented offerings).

There is also MT940 (https://en.wikipedia.org/wiki/MT940 ), which in theory can be used for exchanging and scripting bank accounting data. The format is a terrible mess, though, with each bank implementing it slightly differently (if they offer it at all).

Then there is the Incoterms ruleset (https://en.wikipedia.org/wiki/Incoterms ), which is commonly used in supply chain management.

If you have time, would you like to discuss some of these ideas?

I wish that laptop webcams had open-source hardware and also a hardware switch that connects and disconnects the webcam from the laptop (and thanks to the open-source hardware, you could be sure the switch has no "back door"). Turning that switch off and on would be much easier than peeling the duct tape off and sticking it back on all the time.

A similar hardware switch would be useful for the cameras and microphones in smartphones. You can never be sure what your iOS, Windows Phone, or modified Android OS is actually doing.

I really miss the laptops that had physical switches and the little sliding camera covers.

I'd even settle for a good case that just covers/muffles all of that.

Microphones especially are a problem: you can always sticky-tape the camera, but you cannot easily and reversibly incapacitate the microphone.

Especially since they don't even have to be surface mounted. It'd really be nice to be able to trust devices.

Sure you can. Think out of the box. A small speaker that sticks over the microphone opening and constantly plays a recording with two voices overlaid, one reading the Gettysburg address and the other reading the declaration of independence, constitution and amendments. You would have to be shouting to be intelligible over top of that.

Even better, think of how a microphone works. It's just a magnet and a wire, right?

Could you place a small magnet over the mic to essentially lock it in place? No movement == nothing to record.

I'd like to see a "local cloud" computing system, where CPUs/GPUs are stored in a rack on site and each workstation is only a KVM + monitor plus a high-bandwidth (say, fiber-optic) video link back to the cabinet.

This would be almost a regression to mainframe days, but it could give everyone the power of a high-end workstation as well as save energy, since most scenarios won't have a 100% load factor. An office of 20 people probably never actually needs 20x (1 CPU, 1 HDD, etc.). You could buy one rack with 5 beefy workstations and have a much more efficient load factor. As long as you could keep the video link up, the users should never feel any lag.

edit: you'd also need a video-card multiplexer, so for the 20-worker scenario, you'd have one board with 5-10 GPUs multiplexed to 20 fiber-optic outputs (doesn't have to be fiber, but something fast enough to give you 4K @ 60 fps).

Never thought about this architecture from an energy-savings perspective, but it's actually probably really good. Another thing: USB over Ethernet is now a thing in the Linux kernel... so even with pluggable hardware this could make perfect sense. Interesting idea.

I could also see this in a suburban neighborhood. Instead of buying "internet" from your ISP, you could buy "computing" from them. Every block has a server cabinet, with fiber video links to each house. Then your ISP is responsible for maintaining your OS.

This could go either way: given the current state of American ISPs, it could be a disaster. But in a "perfect world", it would "just work", because as soon as your OS went down, a tech would be dispatched to remote in and fix it, or drive out there and replace a blade in the server. In aggregate, this would drive up demand and evolve a highly reliable OS. In that case everyone should have highly reliable service (albeit with privacy concerns), as long as the video could keep up.

Not keen on this idea... I think individual independence in computing hardware is a very important social and technological feature against fascist technocracy. I think sharing infrastructure at the office level could make sense, or within a home, but not across a neighborhood with current technology.

While you may be right that overall availability would increase under such a design, I don't really think reliability is a problem for most people right now.

Maybe it would work in some special scenarios though? For example it may be a viable model to provide 'basic service' computing to individual homes within a low income / government supported housing environment, prisons, or hotels. (Fun fact: In mainland Chinese hotels, they often provide computers specifically so that people can watch porn.)

Isn't this just thin clients?

I suppose. But I'd say take it further and capitalize on advancements in VM technology so as to make it indistinguishable in performance from an attached workstation.

I have a really bad memory, but the part that affects me most is an inability to remember conversations, even fairly recent ones (like from the day before). Just zero recall of them ever happening. I'd love a way to quickly and unobtrusively log 'talked with Able and Baker about X, Y, and Z at such-and-such time and place'. Right now all I can think of is typing it into my phone or writing in a notepad.

But having such a solution, that I could review to improve recall, or search if necessary, would be lovely. Something quick and unobtrusive.
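The "quick, reviewable, searchable" part could start as simply as appending timestamped JSON lines to a file. This is a toy sketch with invented names, not any existing app:

```python
import datetime
import json

LOG_PATH = "conversations.jsonl"  # one JSON entry per line

def log_conversation(people, topics, path=LOG_PATH):
    """Append a minimal, searchable record of a conversation."""
    entry = {
        "when": datetime.datetime.now().isoformat(timespec="minutes"),
        "people": people,
        "topics": topics,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def search(term, path=LOG_PATH):
    """Return entries mentioning a given person or topic."""
    hits = []
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            if term in entry["people"] or term in entry["topics"]:
                hits.append(entry)
    return hits
```

The unobtrusiveness problem remains: the hard part isn't storage, it's capture.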

Kind of like the remembrance agents the early wearable-computing people used? http://alumni.media.mit.edu/~rhodes/Papers/wear-ra.html

Something like that would be wonderful, yeah. Far as I can tell, it's not a real existing thing, right? Just an idea that there was a prototype of?

I was actually building something like that for myself.

Would you use it if you had to manually log it?

Yeah. Having to manually log to /some/ extent is actually my expectation; it just has to be simpler than typing out a whole log entry. I don't want to type out

"dec 02 2016 Hung out with Alex and Sammy, talked about Star Wars and whether Darth Maul or Qui-Gon would win in a fistfight, Sammy was backing Darth Maul and won the argument"

I don't have any ideas to improve that in any way, I don't know if it CAN be improved without some element of literal mind-reading, I just know that typing out the above is inadequate, it's too clunky, time-consuming, and conspicuous.

What about a Speech-to-text solution? Don't have to take the time to pull out your notebook or type on your phone's screen. Just tap a button on a headset or tell GNow to add a log entry, and speak the entry.

Communication; the next best thing to reading minds!

Interesting. I initially thought people might be comfortable typing like that and letting the app parse it all out. Thought it might be more natural than filling out a form.

What would you expect, or be willing, to type out instead?

In the age of single-page applications that work almost without any backend, I am missing a BaaS (Backend as a Service) that would handle users (sign up, sign in, forgot password), their data (saving and loading user-generated data), and payment processing at the same time. Right now you could try to combine Kinvey with Stripe or Cognito with PayPal, but it is not easy at all.

Full disclosure: IBM Cloud Identity Services is my current employer / project.

There is a concept of IDaaS - Identity as a Service that covers the first part of your request (sign up / identity lifecycle, authentication / single sign on, user self-service for password resets, etc...).

This is one such offering; there's a good overview of the features here: http://www-03.ibm.com/security/cloud/cloud-identity-service/

Also, I recommend looking into the IBM BlueMix cloud platform: https://console.ng.bluemix.net/catalog/

I think things are heading towards what you're describing, but picture it more as a catalog of skeleton backend apps built on top of pieces like you called out, as a starting point for different types of projects.

Firebase for everything but payments.

Can you talk about some of the challenges integrating kinvey and stripe? Seems like if you can setup a webhook on kinvey for stripe to call, that's about all you'd need?
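On the Stripe side, the webhook handler mainly has to verify the `Stripe-Signature` header before trusting the payload. A sketch of that check, based on Stripe's documented scheme (HMAC-SHA256 over `"{timestamp}.{payload}"` with the endpoint's signing secret); treat the details as assumptions and check the current docs:

```python
import hashlib
import hmac

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str) -> bool:
    """Check a Stripe-Signature header ("t=...,v1=...") against the payload.

    A production handler should also reject stale timestamps to
    prevent replay attacks.
    """
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    signed_payload = parts["t"].encode() + b"." + payload
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parts["v1"])
```

With that in place, the webhook endpoint just updates the user's subscription state in the datastore, which may be where the integration friction actually lives.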


So it seems userapp.io is not very active. Is the project dead?

That seems very promising.


You know how the iPod is now this tiny little clip-on chip with earphones? I want a phone like that. No apps, no "smart" this-or-that: just a phone, with a contact list. The thing can do voice calls and SMS ONLY. It's a real cellular phone, but without all the crap, and with a monthly cost of under $20. Make that, and you'll see a revolution.

What kind of revolution are you expecting? Loads of people already have smartphones and I don't see them switching, and you can already get a flip phone for < $10.

The point is a Trump/Brexit-type phone: none of the built-up cruft and corruption of a "smartphone". It is just a phone. The "revolution" would be the massive loss of revenue the popularity of such a device would bring, which consumers could use as a purchasing weapon to say "no more spy hardware!"

Like a burner T-Mobile flip phone from Walgreens?

That's what I was thinking. Maybe they're looking for smaller, like that awesome 'gag' flip-phone in Zoolander.

And that "kit" is a smart-ish type phone, which cost over 2x what I said the price should be for the final product. The point is a non-smart device as a form of protest against the ever-present spy hardware everything is becoming.

Source-specific IP Multicast support on the open internet, which probably needs some serious magic to make it feasible on core routers, if it is solvable at all.

Some shabby server in someone's basement could stream data to millions. CDNs would no longer be needed.
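On the receiver side, SSM already has OS-level support: the subscriber asks for a (source, group) pair rather than a bare group, which is what rules out the classic any-source loops. A hedged sketch of packing the join request; the constant value and field order match Linux's `struct ip_mreq_source` (BSD orders the fields differently, so check `<netinet/in.h>` on your platform):

```python
import socket

# Linux-specific; not exposed as a constant by Python's socket module.
IP_ADD_SOURCE_MEMBERSHIP = 39

def ssm_join_request(group: str, source: str, interface: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq_source: receive traffic for `group` only
    when it originates from `source` (SSM groups live in 232.0.0.0/8)."""
    return (
        socket.inet_aton(group)       # imr_multiaddr
        + socket.inet_aton(interface) # imr_interface
        + socket.inet_aton(source)    # imr_sourceaddr
    )

# A real receiver would then do something like:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("", 5000))
# sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
#                 ssm_join_request("232.1.1.1", "198.51.100.7"))
```

The missing piece is not the end-host API but routers on the open internet honoring the join.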

There is a great fear of multicast among many network engineers I talk to. At its core, this comes from it being too easy to create cycles. At internet scale the potential for cycles would be even higher, because the TTL would have to be on the order of ~20-30 hops, and many engineers will just set this number too high.

Now I don't have experience with Source-specific Multicast, which is supposed to resolve the cycle problem, but I think there remains a lot of fear of Multicast in the industry.

Can anyone else comment on cycle issues?

Also, in terms of scaling, multicast requires a copy of the packet on the router for every port registered as a listener on a multicast address; this will increase load on the larger routers.

About scalability, a possible compromise might be making some distance-based limitations, i.e., limiting subscriptions per prefix or adjusting packet drop rates.

That way regional SSM can still be useful, and local islands (which would hopefully grow over time) could be bridged by unicast links or, if they have overlap, by leapfrogging.

Even at the ISP level there would still be use cases for SSM.

That sounds a lot like taking the D out of DDoS.

Isn't this how multicast IP television works anyways? With IGMP and PIM?

Multicast on top of MPLS isn't really a thing with respect to the backbone routers.

I'd love to bring banking into the 21st century: the ability to create virtual cards, scale bank accounts based on needs/usage, easy access to loans, etc. And all this would have to work internationally (go anywhere in the world... 2 people have a smartphone? Great, this system works). Give people the ability to save and invest, help them along that way, and give them more power over their money.

Also the ability to get the entire system to work electronically offline.

So we'd need advancements not just in technology (tbh, the tech would be the easy bit), but in laws and in the ideas/conceptions people have about banks and monetary systems in general.

Currently a digital nomad in need of a BIC/SWIFT-capable bank account with a debit card attached and, at best, a proper online interface.

I really don't understand how this shit is so hard.

Like I said, because of a lot of reasons (some personal, some unfortunate incidents), I've been looking for the above solution for over 5 years and have found none so far. :(

Sadly it's never the technology, it's the laws that are the issue (and I can see why they are necessary too, so it's just in limbo)

Yeah, absolutely. Were it just about the technology, I am sure we could have fixed that long ago :/

> scale bank accounts based on needs/usage

Can you give an example of this?

Well, right now, if you go to a bank (in the US/CA), you have to choose between a few plans (if you will) based on transactions and usage, and you get charged based on those plans.

However... your usage generally changes, and so should your plan. (For example, I personally have a lot of transactions happening in April/May, and then it generally drops to a few every month.) To avoid paying those transaction charges, I have to always hold around a $5000 base balance.

While this might not be an issue for me (I have a comfortable income where the $5k does not affect me as much, and my spending habits are not excessive either), there are a lot of people for whom $5k (or even the $1k/month minimum) is a lot, and these people, who need money the most, usually fare the worst. I find that unfair (similar to buying in bulk).

I want a technology that reads and transmits blood, hormone and micro-biome information so that I can optimise nutrition, rest and training. I'd accept some form of implant but bonus points for the least invasive solution.

Medical lab technologists are in an ideal place to start delivering this sort of information. If testing of tissue/food/etc. could be condensed onto small disposable chips, it could convey a lot of information.

Get sick in the stomach? Put a sample of your food on a small chip before you eat, and now we know that the Chinese restaurant you ate at had high levels of E. coli.

A browser on Linux that just works and doesn't use up all my RAM.

Basically like Chromium a year ago? Before they removed support for Hangouts and I had to switch to Chrome, which crashes all the time, usually when using Hangouts.

What's the problem with Chrome? With an ad blocker (so not loading masses of ads) I don't have any issue with it. RAM is cheap these days as well.

What's broken about Hangouts? I just tried it in Chromium for Windows and it seems to work OK.

Late edit: This is a serious question. I can't find any information about Chromium dropping support for Hangouts. What happened?

>Go wild, dream big.

I want a DNS replacement that allows you to "subscribe" to TLDs, each with their own policies and practices. Domains are identified by a tuple of a public key and the domain name, so that if someone wants to share a domain, the browser can let them select the host they want. It should have 3 (or more) distinct modes of operation: Gossip, where popular portions of the zone are continually shared between nodes; P2P, where queries traverse a graph of peers, rather than a tree; and a Classic mode, which functions as DNS does.
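The (public key, domain name) tuple part of this could be sketched as a registry where several keys may claim one name, and resolution returns all of them so the browser can let the user pick which key to trust. All names here are hypothetical, and a real system would verify signatures with proper asymmetric crypto rather than a bare fingerprint:

```python
import hashlib

# A name maps to one record per public key, so two parties can "share"
# a domain and the browser can ask the user which key they trust.
REGISTRY = {}

def fingerprint(pubkey: bytes) -> str:
    """Short identifier for a public key (a hash stands in for real PKI)."""
    return hashlib.sha256(pubkey).hexdigest()[:16]

def register(pubkey: bytes, name: str, address: str) -> None:
    REGISTRY[(fingerprint(pubkey), name)] = address

def resolve(name: str) -> dict:
    """Return every (key fingerprint -> address) claiming this name."""
    return {fp: addr for (fp, n), addr in REGISTRY.items() if n == name}
```

The gossip/P2P/classic modes would then just be three different transports for the same (key, name) lookups.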

I want a standard way for ALL OSes (incl. Mobile) to share notifications. I can't even get my phone and smartwatch to share all the notifications I need, and they're from the same manufacturer!

I want a protocol and software to combine the paradigms of Syncthing and FTP, so you could download a folder's contents (or a subset thereof), or keep it up to date, all with syncthing's discovery and authentication framework.

I want a platform where every action is entirely scriptable in something that's not Javascript. Get an IRC message while I'm not there? Send the text to program A. New USB device inserted? Launch a virus scanner there. It would be exceedingly helpful if you could tie in to the browser and define scripting events on 3rd party pages.
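That kind of event scripting could be sketched as a small registry mapping event names to user handlers. This is a toy with invented event names; a real platform would wire these hooks to the OS and the browser:

```python
# Hypothetical event bus: system events dispatch to user-registered
# handlers, which could in turn shell out to any program.
HANDLERS = {}

def on(event_name):
    """Decorator registering a handler for a named system event."""
    def register(fn):
        HANDLERS.setdefault(event_name, []).append(fn)
        return fn
    return register

def fire(event_name, **payload):
    """Dispatch an event to every registered handler, in order."""
    return [handler(**payload) for handler in HANDLERS.get(event_name, [])]

@on("irc.message")
def forward_when_away(text, away=True):
    # In a real system this might pipe the text to program A.
    return f"forwarded: {text}" if away else "seen"

@on("usb.inserted")
def scan_device(device):
    return f"scanning {device}"
```

The interesting design question is the third-party-page case: defining stable event names for content you don't control.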

Finally, I wish Plan 9 was a viable desktop OS because its protocols and paradigms line up very nicely with some of these. Plus, it's a really neat, simple OS.

> I want a protocol and software to combine the paradigms of Syncthing and FTP, so you could download a folder's contents (or a subset thereof), or keep it up to date, all with syncthing's discovery and authentication framework.

Could you explain this syncthing desire more?

I developed SyncthingFUSE [0] for something that possibly sounds similar to your desire. I've let the project languish, but I'm planning to pick it back up.

[0] https://github.com/burkemw3/syncthingfuse

I agree with most of these, but why not JS? (would TS or Swift be more along your lines?)

I want to know why people dislike/dismiss JS, considering it's one of the easiest/quickest languages to pick up and go furthest with.

I quite like the Lua scriptability of awesomewm but I'm not sure how you'd tie it in to browser events (browser plugin?)

> I want a platform where every action is entirely scriptable in something that's not Javascript.

So...PHP? VBScript? Why specifically not JS?

I want a simple bug tracker that's hosted entirely within my VCS repository, with a fully featured web interface.

There are a few abandoned projects like this, but they're all CLI-focused with view-only web at most. I want something better than just keeping a TODO file, not worse.
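One plausible shape for this: store each bug as a JSON file under a `.bugs/` directory, so issues branch, merge, and travel with the code, and the web UI just reads the same files. A sketch; the layout and names are my assumptions, not any existing tool's:

```python
import json
import os
import uuid

def new_bug(repo_dir, title, status="open"):
    """Create a bug as a JSON file under .bugs/ so it is versioned
    alongside the code it describes."""
    bugs_dir = os.path.join(repo_dir, ".bugs")
    os.makedirs(bugs_dir, exist_ok=True)
    bug_id = uuid.uuid4().hex[:8]
    with open(os.path.join(bugs_dir, bug_id + ".json"), "w") as f:
        json.dump({"id": bug_id, "title": title, "status": status}, f)
    return bug_id

def list_bugs(repo_dir, status=None):
    """Read every bug file, optionally filtering by status."""
    bugs_dir = os.path.join(repo_dir, ".bugs")
    bugs = []
    for name in sorted(os.listdir(bugs_dir)):
        with open(os.path.join(bugs_dir, name)) as f:
            bug = json.load(f)
        if status is None or bug["status"] == status:
            bugs.append(bug)
    return bugs
```

One file per bug (rather than one big TODO file) is what keeps merges conflict-free.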

So, basically bugseverywhere with a web UI?

FPGAs in desktop/laptop computers. That would open all kinds of interesting possibilities.

Energy from nuclear fusion.

Highly dense electrical energy source. I want to see electric planes, phones that last for months, drones that stay in the air for days, etc.

Some kind of magical security system so we could return to native code instead of putting VMs on top of VMs. No idea if this one is feasible or how it might be achieved.

Cheaper IC foundries. I think there might be a market for old processes if they were substantially less expensive.

I don't think that some kind of awesome battery will give us phones that last for months. We'd either make the battery smaller (=cheaper) or crank up the processor.

You're probably right, but I guess it would be a possibility if we just replaced the battery in current phones.

For the other things I didn't necessarily have a battery in mind (doesn't change your point though).

Sweet. Thanks for the reference.

Hopefully these cards will become more widespread.

I would really like a (preferably cross-platform) solution for browsing the internet in a maximally silo-ed way. I think Firefox has been making some progress on this, but here's my ideal workflow:

- I can set up any number of "site groups" that sites go into by default: "www.google.com" and "www.gmail.com" are in the "google" group, and those sites open in their own pseudo-profiles with their own history, own cookies, etc.

- For a given site, I can open new tabs in any group (or, for sites in multiple groups, set the "current group" for the site). This way I can have a github account for personal stuff, a github account for a different context, etc.

- For sites not in any "siloed" group, everything operates in a "clear after closing" mode, where once all browser instances are closed, all data, cookies and history are cleared.
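The default-group lookup behind this workflow could be sketched as a small resolver; the group table and the "ephemeral"/"ask-user" sentinels below are purely illustrative:

```python
# Sketch of the site-group resolution described above. The mapping and
# return values are hypothetical, not from any real browser API.

SITE_GROUPS = {
    "www.google.com": "google",
    "www.gmail.com": "google",
    "github.com": ["personal", "work"],  # multi-group: user must choose
}

def resolve_group(domain, chosen=None):
    """Return the pseudo-profile a tab for `domain` should open in."""
    groups = SITE_GROUPS.get(domain)
    if groups is None:
        return "ephemeral"           # cleared once the browser closes
    if isinstance(groups, list):
        return chosen or "ask-user"  # site belongs to several groups
    return groups
```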

Right now I just accomplish this manually by using a locked-down version of Firefox for default browsing and different Chrome profiles for each "site group". It helps that I've forsaken more or less all web-apps, so I don't have to worry about accidentally clicking a link in Google Hangouts that will then open in the Google profile.

Have you seen Qubes OS [0]? I have never used it, but the video demo [1] looks to have exactly what you want. You define different "security domains" and can open applications in each. Each domain has its own data and can have multiple applications. For example, you could download a file from your Google Drive (in your Google domain) and open it. But that doesn't have any effect on other domains, and those domains don't see that file.

Edit: Just read the other half of your comment. There also seems to be a throwaway domain you can use, which doesn't persist data after you close it.

[0] https://www.qubes-os.org [1] https://www.qubes-os.org/video-tours/

I specifically meant in a browser. I'm certainly planning on looking into Qubes for an OS, but this is really more of a problem with data leaking within a browser, I don't think the OS has much say in that.

Well, you'd ideally create different VMs wherever you want barriers against data leakage via cookies, history, session- + localStorage, and loading assets/scripts/etc. from, or making cross-domain requests to, 3rd parties.

But to your point, even within a given browser/VM/whatever that I intend on using for a single task. I'd still eventually end up slipping up and going to a site that should have been in a different group or logging into something that I wanted in an ephemeral group.

There is a real problem currently with data sharing on the web, where it's pretty much all or nothing. That'll still be true under the scheme you mention, just the goalpost will be moved to within whatever grouping scheme you pick.

It seems there isn't a good way to allow sites to do useful things like ajax to lots of services not on the same domain, while preventing the site provider from sharing your data with people you don't want to have it.

Ad- and tracker- blockers alleviate some of this by preventing certain undesired code from running and thus sharing data. But, user tracking is still possible (and used to actually happen by) using server-side logging and shipping info to 3rd parties.

Ultimately, we are relying on the people or organization behind a site to not share our data.

And yes, the tools could get a bit better with grouping/profiles. But the vast majority of users won't use these tools. And even if you use them, you have to be diligent about only opening certain sites in certain "groups" and/or managing which group you are logging into your account in.

So, features like this always will be niche unless there is a revolutionary shift in users' technical ability and attitude towards data protection / privacy.

Maybe visiting a site would locally select the appropriate group that site was assigned to. For example, going to gmail.com would automatically use the "Google" Group. If a site is assigned to multiple groups, prompt the user to select a group. An unassigned site would use a default group, or create its own group for return visits, or create a temporary group.

Yes, this is what I was suggesting. Currently I am doing it manually, but if you can assign groups by site domain, it should feel much more seamless.

An open source & free, ACID compliant, horizontally scalable database that supports versioned data out of the box.

Currently, way too much time and money are being spent re-implementing this functionality on top of normal DBs.

PS: There is Datomic, but it's not free: http://www.datomic.com/

I wanted to say "Postgres with bitemporal tables"---practically the same thing. :-)

I'd like to be able to implant a tiny subdermal body heat-powered circuit in an animal so I can track their location on a map.

Same thing, but very light weight and with a discount for buying in bulk.

(Best friend is an ornithologist.)

This stuff must already exist for humans.

I want a backend service I can call from my statically generated (i.e. jekyll) websites, to implement things like blog comments, analytics, etc.

A true cloud platform where I don't have to mess with admining and provisioning servers and containers. I want to just upload my code and pay for the cycles and bandwidth I use. Aws lambda for http basically.
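The comments half of this could be a single JSON endpoint that the static site calls; here's a sketch, with an in-memory dict standing in for whatever datastore the real service would use:

```python
# Sketch of a comments backend for a static site: one function accepts a
# JSON payload, one returns comments for a page. Storage is an in-memory
# dict purely for illustration.

import json
from collections import defaultdict

COMMENTS = defaultdict(list)  # page URL -> list of comments

def post_comment(body: str) -> str:
    """Accept JSON like {"page": "/post/1", "author": ..., "text": ...}."""
    data = json.loads(body)
    COMMENTS[data["page"]].append({"author": data["author"],
                                   "text": data["text"]})
    return json.dumps({"count": len(COMMENTS[data["page"]])})

def get_comments(page: str) -> str:
    """Return the stored comments for a page as JSON."""
    return json.dumps(COMMENTS[page])
```

The Jekyll side would just fetch `get_comments` output at render time on the client and POST new comments to the other endpoint.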

I started to take a swing at the comments part a couple years ago with firebase: https://github.com/mimming/firebase-jekyll-comments

That's pretty cool, what's the status? Have you had anyone adopt it? The setup seems a bit complex honestly. But generally, firebases nosql store is a turn off imo.

https://github.com/skx/e-comments/ was my take on that - a disquis-like service that allows static-sites to add comments via javascript inclusion.


disqus + google analytics?

I don't want disqus ads, nor google analytics referrer spam. Plus uBlock blocks GA, so it doesn't track the 'ghosts', i.e. the technical users that are my target audience, who use uBlock at a much higher rate than the norm.

I'd like to just implement what I need.

Use goaccess on the server? Or CNAME ganalytics? I usually use Google Analytics for things the server does not see and cross-reference with goaccess. You can't make a new analytics service without ending up on adblock lists anyway.

Also, disqus works without ads. Their ad system sucks badly anyway; I was making 30 cents a day on a website that makes at least $30 with AdSense, even though disqus used up even more space.

"Within C++, there is a much smaller and cleaner language struggling to get out." -Bjarne Stroustrup

I've had a not-specific-enough idea of what that language would be for a long time. As far as I'm aware, nobody has created it yet.

- Realtime DBaaS/BaaS using GraphQL - ie, like Parse or Firebase but where you don't have to use a vendor-specific client, just GraphQL. Sane configuration of permissions/access rules.

- Functions-as-a-service with a better UX than AWS Lambda

- Server-side-rendering-as-a-service for React-based SPAs. When a request comes in it renders the page, sends it up to the client, and automatically 'solves' hydration of data.

- A better scripting language than Bash.. maybe Ruby but with a simpler way of running shell commands / getting stdout+stderr. Should be really easy to test.
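On the last point: not the Ruby the comment asks for, but a sketch of the desired ergonomics using Python's stdlib — shell out, and get stdout, stderr, and the exit code back as ordinary values that are easy to test:

```python
# Sketch of "Bash-but-testable": wrap shell invocation so results come
# back as plain values instead of magic globals.

import subprocess

def sh(cmd: str):
    """Run a shell command; return (exit_code, stdout, stderr)."""
    p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return p.returncode, p.stdout.strip(), p.stderr.strip()

code, out, err = sh("echo hello")
```

Because `sh` returns a tuple, scripts built on it can be unit-tested without capturing terminal output.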

You're on my wavelength with these, I've wanted each one at some point. #2 might become a reality.

A solution for game assets building.

Ideally it would be possible to build all assets (maps, models, textures, binaries) in a game from source files to the format that's distributed to people.

Unfortunately due to the build times for certain things like maps it's not feasible to rebuild everything before every release. Combined with the fact that the files are relatively large means that we can't store everything in one git repository.

I'd be very grateful if anyone knows a solution to this.

Please look at https://www.plasticscm.com/version-control-for-games.html

We are. We're a 2-man team and have our own hacky way: some in-house Python scripts to build our game assets. As we do this seriously as a hobby, it works :). But now we are at a stage where the 2 games are coming along well, and to go whole hog, I would choose Plastic.

It has integration with Unity and GitSync as well, so migration is easy, and with $7/mo cloud plans for 5GB of assets it works for us.

The last time I checked, Plastic was out of reach for hobbyist/indie game devs. Hope this helps.

Have you looked at Git LFS? https://git-lfs.github.com/

GitLab have it in the Community and Commercial Editions: https://about.gitlab.com/2015/11/23/announcing-git-lfs-suppo...

I haven't yet, thanks for the suggestion, I'll look into it.

I assume you're talking about a large 3d game? Are you using Unity3D? Because if you are, then building game assets involves scripting the Unity3D client itself, which is a huge pain in the ass.

As for the git problem, why not use svn or perforce for the game assets? They handle binaries much better than git does.

Yes, it's a large 3d game made in the Source Engine. Luckily all tools are scriptable using bash.

Using svn for the game assets would work, but to build them we would still need the shaders, for example, which would live in the git repository alongside the rest of the game's code. The resulting problem is that the asset git/svn repo should be triggered to rebuild when the code git repo changes. I'm looking for a tool that could help with that.
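The trigger itself could be as simple as recording which code-repo commit the assets were last built against and rebuilding whenever it moves; a sketch (the git call and paths are illustrative):

```python
# Sketch of a cross-repo rebuild trigger: compare the code repo's current
# HEAD against the commit the assets were last built from.

import subprocess
from typing import Optional

def code_head(repo: str) -> str:
    """Current HEAD commit of the code repository (requires git)."""
    out = subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def needs_rebuild(current_head: str, last_built: Optional[str]) -> bool:
    """Rebuild if assets were never built, or the code repo has moved on."""
    return last_built is None or current_head != last_built
```

A cron job or post-receive hook would call `needs_rebuild(code_head(...), stamp)` and kick off the asset build when it returns true, then update the stamp.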

Thanks for the suggestion of perforce, I'll look at that.

> Luckily all tools are scriptable using bash.

Lucky you. :)

This isn't a simple problem and I don't know of any one size fits all solution for this. From what I could tell, every company comes up with its own asset pipeline that fits with their art, code, test and release processes.

I came up with a half assed solution myself back when I was still working in games. We had a separate build server just for assets. It ran a script every hour that pulled down both the code and art repos and watched for changes in certain folders. The shaders, 3d models and textures were linked via naming conventions. Through those naming conventions, the system always knew where to find all of the relevant pieces of any given individual 3d asset (in theory, anyway). It would then create the assets and check them into a separate SVN repo. This system worked ok, but it had a number of problems.

One problem was the naming conventions. The artists didn't much pay attention to file names, so any new asset they created would inevitably not get built because it didn't adhere to the conventions. We thought about replacing the naming conventions with a json file describing every 3d asset, where their shaders were, where their 3d models were, any special settings, where they should be saved, etc. but that would have required either a GUI application to help manage it all or a coder whose job was to manage the data files. The former would have taken time and the latter wasn't much better than what we were doing already. We still needed a coder whose job it was to fix asset build issues, which would pop up on a regular basis.
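That naming-convention linkage might look something like this in code (paths purely illustrative):

```python
# Sketch of naming-convention asset linkage: one base name deterministically
# yields every related file, so the build system can find all the pieces
# of a 3d asset without a manifest.

def asset_files(base: str) -> dict:
    """Derive the related file paths for an asset from its base name."""
    return {
        "model":   f"models/{base}.fbx",
        "texture": f"textures/{base}_diffuse.png",
        "shader":  f"shaders/{base}.hlsl",
    }
```

The fragility described above comes from this being implicit: nothing stops an artist from saving a file that the convention function can't derive.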

Another issue we had was build revisions. Game builds are complicated enough and we added the additional complexity of having to specify a branch name and version number for the assets used. It doesn't seem like it would be an issue, except when you have parallel art work going on in parallel branches along with parallel dev work going on in parallel branches things break. In retrospect, we should have put our foot down and said there is only one official asset branch and all dev builds will use the latest version of that branch, but we didn't.

Sorry for the essay, but I hope it was of some use to you. If you ever come up with a universal asset pipeline, sell that sucker! :)

Good to know other people also had the same problem. If I build it I'll certainly try selling it :)

For unity, there was Unity Asset Server. Not sure if it's still available/supported.

More generically, I'd suggest you check out Mercurial with the Largefiles extension.

Could you set this up using gulp?

Yes, the problem is that due to the size most of it lives in different repositories so it's hard to know what version you're supposed to build and when something you depend on changes.

A programming language and toolset that could transpile to human readable code for just about every popular/mainstream language out there. The closest I've seen is Haxe. This would be great for sharing business logic between different platforms. Right now it's a fragmented mess of tools of various quality, many using regexes to get you only 90% there. For example pretty much everything can transpile very well to Javascript, so no problem there. But what if I want to write common code for iOS, Android, and the web? I've never figured out a good solution here besides contortions with C/C++ or Javascript that introduced more problems than it was worth. Either that or you have to go all-in with a big framework like Xamarin.

I think part of the problem is that there isn't a demand, and, if I may rant a little, the reason there isn't a demand is that most developers do not separate business logic and platform-specific stuff (rendering, I/O, etc.) in the first place, whether from lack of skill/experience (which everyone including me is guilty of early in their careers), or the fact that the framework/platform you're working in "discourages" it.

What is wrong with C++ for that? Works fine but maybe you have a specific case? What about C# or F#? That works fine as well for that purpose.

Having done a few years of cross-mobile C++, we get into opinion territory perhaps, but it was never a smooth experience for me. Apologies, but don't have time today to go into too much justification for my opinion. It can be the lesser evil (e.g. you have an existing greater-than 10K LoC C++ codebase that you don't feel like porting to both Java and Swift), but, let's say I want to write a few methods, a few hundred lines of business logic, to validate user input on all three platforms. Sure I can bring out the C++ hammer but I've found that decision comes with more tool-fatigue overhead than it's worth, especially on Android. And I don't know the C++->Javascript story that well, that may be smoothing out these days.

But even if all that worked smoothly, what if I want to move my logic to PHP as well? I know, now I'm getting crazy, but that's what the poster asked for :)

A relational database that scales out infinitely, is available as a service, and where payment is per query, with a long free period, and very cheap.

A secure communication technology that is understandable (after reading some mans) and verifiable by an average user, without being infosec expert.

For now, there are so many unknowns in the equation, that users have no choice but to rely on vendor's authority, which means centralization.

Don't tell me it's theoretically impossible because it's possible in almost every other area.

How verifiable? You've seen the Underhanded C code competition? http://underhanded-c.org/

I'm not talking about code bugs and backdoors, which can always exist. I'm talking about protocols, encryption schemes and architectures. We need "comprehensible security", as opposed to incomprehensible, "NP-hard" and (soon) even "quantum-hard" security, accessible to only 0.1% of engineers, let alone users.

We still don't have digital signing technology that is verifiable as easily as paper signature and works peer-to peer, without some big corp/govt involved.

The challenge here is that it's super-easy to design protocols that the average person would consider 100% secure, even after reading some documentation. It's the same problem where anyone can create a lock/encryption that they don't know how to break, but there's always someone smarter or better versed in breaking it.

Subtle bugs don't look like problems until someone figures out an exploit against them.

The one-time pad is secure, and each step in its operation can be validated by a layperson with pen and paper.
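For the curious, the whole scheme fits in a few lines: ciphertext is plaintext XOR key, and applying the same key again recovers the plaintext. (Toy key below for illustration; a real pad needs a truly random, never-reused key at least as long as the message.)

```python
# One-time pad sketch: XOR with the pad both encrypts and decrypts.

def otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "pad must be at least message length"
    return bytes(b ^ k for b, k in zip(data, key))

msg = b"ATTACK AT DAWN"
key = bytes(range(len(msg)))   # toy key; use os.urandom(len(msg)) in reality
ct = otp(msg, key)             # encrypt
```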

How could the average user possibly verify any kind of modern technology? Programming VCRs is too hard for almost everybody, do you really expect these people to understand even the most basic explanation of how their privacy is protected?

You don't have to know how VCR works in every detail to use it and be sure it does what it should. You just look at the recording and see its quality.

On the other hand, using some security product, you can't tell if it's secure or it's snake oil, even if source code is open.

There aren't any adversaries trying their best to sneak a different X-Files episode on your recording, so you can reasonably assume that checking the quality of the recording is enough to verify that the show you wanted ended up on the tape.

With security software, the threat model is completely different.

1 device with multiple Airplay points that can output to multiple amplified speakers. Price would determine how many Airplay points / speakers exist (1 device for 4 speakers, 1 device for 8, etc...) Bonus points if the device has the amplifiers built into it.

In my house, I have multiple Airport Expresses for the sole purpose of connecting them to different channels on an amplifier to stream different audio sources to different parts of the house. So 1 point exists for the speakers in the kitchen; 1 for the speakers in the dining room; 1 for the patio, etc...

So the wife could be prepping dinner in the kitchen, listening to 1 music stream and I can be on the patio prepping the brick pizza oven listening to another stream (we have vastly different music tastes)

So I've always wanted 1 device with x endpoints that I could configure each and connect to specific speakers.

Then again, with the routers being put on "hold", this may be moot and I might have to switch to a Crestron or something else.

Couldn't this be simplified by just having an airport express + a set of powered speakers (or appletv if there is a tv too) in each room/location?

I would like to see a SPA persistence framework that is based on the Command pattern, so that it can give you an automatically-generated backend, Operational Transformation-based collaboration, offline functionality, and undo/redo.

haskell on rails. the amazing static typing, binary compilation, etc. of haskell, with a large ecosystem and monolithic framework (like rails) behind it. the haskell community's focus on challenging technical, scientific, or mathematical problems has left normal web developers plainly not using it. but as our rails codebase has grown, it's hard to identify refactoring issues outside of obsessive and overlapping unit tests.

i think Java actually comes closest to what I want here, but unfortunately it's pretty hard to get past all the cruft the ecosystem has accrued over the years. plus, it's java, and marketing that as a company/developer sucks. also, java is missing some of the expressiveness of haskell's type system.

I'm with you.

Haskell's Yesod framework is pretty feature competitive with rails/django, but it doesn't have 1/10th of the community.

I'd love to see Haskell gain wide popularity, but I don't think it's realistically going to happen. People generally travel the path of least resistance, and learning Haskell requires a lot of time and effort before you can be competitively productive with it. Individuals will choose to make that investment, but the masses won't unless they are forced.

I think the closest thing to what we want right now is node with typescript.

I agree with you both that it would be amazing to see this in Haskell, and that Java (which I'm using full time now, after years of rails) is close-ish to what you want. I think this is what Scala is supposed to be, but I'm not sure it has realized that potential. You should look into Kotlin - you'll still miss Haskell's type system, but it's a good bit nicer than Java, and has a lot less history.

did you switch to java for work reasons or because you wanted to?

For work reasons, but I was very happy to do it. Java would not have been my first choice, but I would have preferred it to another project using Rails, if those were the only options.

so what's `django` for java ?

Sorry, I really don't know, I'm not working at that sort of level at the moment.

Yesod (http://www.yesodweb.com/) may be the web framework of your dreams. Yesod is great for building web apps and APIs. The biggest downside is the long compile time, which leads to issues with cheap VPS instances: you need to compile on another machine, since the instance doesn't have enough RAM.

Rails but for JavaScript. Today you have to pick and validate a crypto framework to do token based auth. Or maybe not JavaScript.

A distributed application framework. You run software on your OWN machine, offloading computation and storage needs to other trusted nodes. Framework should have the trust system, a data store that automatically replicates, system to explicitly show what data is being shared with whom. Effectively, you can still log into your account from anywhere. The software and data will sync. Won't work for problems that need large scale data and also lowers its collective value. For instance, a social network but no advertising.

Sounds a lot like Ethereum: https://www.ethereum.org/

In other words, a properly decentralised web.

I'd like some sort of AI to tag and classify my huge amount of epub, mobi and pdf books.

I'd like a blend of Svbtle and Medium. Svbtle is the perfect blog editor interface, and I like the way it displays code samples. (Medium's lack of support for syntax highlighting without the GitHub Gist widget is a dealbreaker for me.) But Svbtle has no social features or commenting features, and I like those about Medium. I also like the social "highlighter" feature of Medium. One thing I'd add (back) is the ability to write notes in the margin.

An immediately consistent NoSQL opensource database, so a CP system, that is persistent, gets horizontal scalability right and is popular enough to be used in production with confidence.


Home loan which re-amortizes to the full term every month so when you pay ahead on it, the amount lowers over time. Snowball your home loan, easily marketable.
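The idea reduces to re-running the standard annuity formula over the full original term after each principal paydown, so the required monthly payment drops; a sketch with illustrative numbers:

```python
# Re-amortization sketch: the standard annuity payment formula, applied
# to the remaining balance over the full term. Loan figures are made up.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Fixed monthly payment that amortizes `principal` over `months`."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

base = monthly_payment(200_000, 0.04, 360)   # original 30-year loan
after = monthly_payment(150_000, 0.04, 360)  # re-amortized after paydown
```

Under the proposed loan, every extra principal payment would lower `after` the next month instead of just shortening the term.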

Doesn't that automatically happen if you over-pay each month? I guess it might depend on your mortgage type and location.

I know in the UK I made no special effort to do this. I had a mortgage, and each month I was supposed to pay £260. Instead I paid £800 and I completely paid off my flat many years earlier than expected, with a corresponding saving in the interest I would have otherwise paid.

> Doesn't that automatically happen if you over-pay each month?

Short answer (in the US): Nope.

Basically, by default extra money comes off of the payments at the end of the mortgage, not off the principal.

Now, that said, some mortgages do allow you to apply an over-payment towards the principal, but you usually have to specify it for each payment you send in, they won't make it the default for your account. Oh, and they will likely only recalculate the size of your payments annually.

One final note: Keep in mind that paying extra (regardless of how it is applied) won't count for anything if you hit a bad patch and start missing payments, so if you are going to dump extra funds into your mortgage instead of hanging on to it, first build up a healthy cushion (3-6 months of all expenses).

* standalone stream processing engine ("event sourcing", "complex event processing") that can run queries (similar to how relational database runs SQL queries) and works in publish-subscribe mode, one that is not a proof-of-concept by some researcher, but actually working software

* database for system logs (JSON documents) that is not as brittle as ElasticSearch and not as big of a memory hog

* topic map engine usable from Perl or Python

How about a music player based on YouTube (maybe built on yt-dl, and cross-platform), because of YouTube's poor support for listening to music.

Youtube generally disapproves of separating audio from video; google 'Streamus'.

What if we don't separate them, just let the video play?

I'm talking about just a good player with nice features (easy playlist manipulation, pre-buffering for next video, queuing up videos, etc.).

I wish there was a good javascript library to handle table (filtering, sorting, paging, ...). The best I found so far was datatable, but it is not perfect.

https://handsontable.com/ is pretty awesome.

I am not very familiar with datatable; could you tell me more about its shortcomings?

IMHO, Datatable doesn't have a nice and clean interface. For example, you can't just do table.AddRow({col1: 'newCol1', col2: 'newCol2'});

It is not always very clear which plugin you should use for paging, sorting, or column reordering. Then it becomes cumbersome to save state using methods from different plugins.

So could be a lot simpler.

Still the best free, open source JS table on the market, to my knowledge.

I wish we had a messaging protocol/standard so that we could chat with anyone regardless of the messaging app, just like you can browse (mostly) any website regardless of your web browser.

I would also like the protocol/standard to be iterated upon to include new features like bots, stickers, file transfers, etc.

XMPP exists. Most major IM networks are built on it, but without federation and without bring-your-own-client.

A brain-computer interface for entering text into a document without speaking aloud or using my hands. This would allow much faster transcription of ideas and the ability to do so while washing dishes, sorting laundry, hanging onto a subway pole, carrying groceries home, etc.

Swift support on Android [I'm aware of Kotlin et al; I'm the kind of guy who never got on well with Java]

Rubymotion, dozens of js frameworks, Xamarin...

I don't really see why Swift.

It doesn't have to be Swift; it could be Dart or something else. What I'm wishing for is a language distinct from Java but with first-tier support, ideally supported by Google. (We all know that JS/Xamarin/RubyMotion aren't there yet.)

oh ok. Yeah, I feel you on that. I doubt Google cares, though. They seem to rather have a lot of crappy apps than a few proper ones.

I stopped doing Android apps because I don't like Java enough to make more than a shitty app. So I'm really with you on this.

1 - A better laptop: 14-inch, borderless screen like the XPS 13, the keyboard of a ThinkPad or Dell Latitude, and the battery life of a MacBook Pro

2 - A better Linux distro: basically the polish, convenience, driver and app support of Windows/macOS, but running the Linux kernel

I want more Sandstorm apps [0]. Specifically:

* an issue tracker (like Bugzilla)

* a money tracker (like Gnucash)

* a Q&A thing (like StackOverflow)

* a meeting organizer (tracking agendas and minutes)

[0] https://apps.sandstorm.io/

An easy to use parsing library with some sort of way to import sub-trees (ie: everything between these parentheses uses this other parser) and good default parsers for obvious things like C-style math and JSON objects.
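The "import a sub-tree" idea falls out naturally of parser combinators, where a parser is just a function from (text, position) to (value, new position), so delegating everything between delimiters to another parser is an ordinary call; a toy sketch:

```python
# Toy parser-combinator sketch: parsers are (text, pos) -> (value, pos)
# functions, so composing or delegating to sub-parsers is plain function
# calls. Names are illustrative, not any real library's API.

def char(c):
    def p(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        raise ValueError(f"expected {c!r} at {i}")
    return p

def digits(s, i):
    j = i
    while j < len(s) and s[j].isdigit():
        j += 1
    if j == i:
        raise ValueError(f"expected digits at {i}")
    return int(s[i:j]), j

def between(open_p, inner, close_p):
    """Delegate everything inside the delimiters to another parser."""
    def p(s, i):
        _, i = open_p(s, i)
        val, i = inner(s, i)
        _, i = close_p(s, i)
        return val, i
    return p

number_in_parens = between(char("("), digits, char(")"))
```

The requested library would ship `digits`-style defaults for C-style math and JSON, so `between(char("("), json_value, char(")"))`-style composition came for free.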

Any parser combinator library. Many of them come with toy examples like JSON and simple arithmetic expression parser.

A parser that would walk all web document elements, exercise keyboard access functions, then report back on what is or is not accessible via keyboard, touch, or click.

If you would be happy with a browser extension or JavaScript library, I'll make it.

I think that would actually be the best way to implement it. A bookmarklet would be nice, however I don't see that fitting within the technical constraints of size limits.

I want a successor to Haskell which follows the Elm mantra.

Can you elaborate on that? What do you mean by Elm mantra, and what do you think is blocking Haskell from succeeding in such that it needs a successor?

A "sharding framework". So I can kind of build my own DB, with the framework providing ~automatic sharding + ~cross-shard queries.

TiVo for streaming video on a PC, so you can locally store/rewind/replay things you've watched recently.

Something like cargo, but for C++ projects.

Something that allowed people to step back and evaluate issues realistically, with correct (non-biased/non-skewed) data.

Unfortunately, the best we can do is use stats, studies, and reports. Nothing that's truly free of bias and 100% accurate.

native Ruby support for:

1. Amazon Lambda

2. iOS apps

3. Android apps

Rubymotion covers 2 & 3.

A Wikipedia for music.

Would it store music, or just information about music?

ios pythonista like software for android

A way of using nuclear power to directly generate electricity, without going through a thermal cycle first. Similar to how photovoltaics work: instead of capturing heat from sunlight and going through a Carnot cycle engine to generate electricity, PV directly converts photons to electricity. Something like this with nuclear power would be revolutionary I think.

Hey dude, I think you're shadowbanned! Most all of your comments show as [dead] :(
