Ask HN: What are you working on?
374 points by mlejva on Apr 4, 2017 | 730 comments
I thought it would be interesting to see what other people are working on. Those projects of course might not be ready to be shown; you can just describe the project and the main problem.

Project: I am building a neural network which should be able to generate a few frames of a video given the preceding and following frames. Currently I am feeding the network simple videos I have created where there is only a single moving pixel. Since I do not have much experience with neural networks I thought this could be a good start.

Problem: Up until now I had not realised how hard it is to find simple video datasets.
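In the meantime, clips like the ones described (a single pixel moving across the frame) are simple enough to generate yourself. A rough sketch; the grid size, clip length, and wraparound behaviour are all arbitrary choices:

```python
# Generate a short clip of binary frames with one pixel moving at a
# constant velocity, wrapping at the edges. Each frame is a size x size
# grid of 0s with a single 1.

def moving_pixel_clip(size=8, frames=5, start=(0, 0), velocity=(1, 1)):
    """Return `frames` grids, with the pixel at start + t * velocity (mod size)."""
    clip = []
    for t in range(frames):
        grid = [[0] * size for _ in range(size)]
        r = (start[0] + velocity[0] * t) % size
        c = (start[1] + velocity[1] * t) % size
        grid[r][c] = 1
        clip.append(grid)
    return clip

clip = moving_pixel_clip()
# With the defaults, frame t has its pixel at (t, t).
```

Feeding the network pairs of (preceding frames, following frames) and asking it to predict the middle ones then just requires slicing these clips.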




As a hardware side project I've been designing and building an open source split-flap display - the kind of electro-mechanical displays you used to see in train stations and airports that loudly flip through letters and numbers as they update.

https://scottbez1.github.io/splitflap/

Have a few working prototypes (https://www.youtube.com/watch?v=bslkflVv-Hw), but I'm currently redesigning the electronics/PCB to make it easier for hobbyists to build (i.e. avoiding tight pitch surface mount components - https://github.com/scottbez1/splitflap/issues/12), and I'd still like to figure out how to make them cheaper to build in small quantities.


This is super cool! The clackety-clack is soo satisfying.

I'm working with someone on developing electronic noses (arrays of gas sensors to identify compounds, do process automation), and he too is concerned with keeping the boards easily built: DIP ICs, through-hole components, etc.

I always wonder what the actual demand is for this. Open-source hardware is great, but how many people are ordering bare PCBs, sourcing each individual resistor, power jack, trimpot, etc and soldering them? I use arduinos all the time and consult their schematics constantly, but I wouldn't dare build one when I can buy 3 nanos for $10.

I'm mostly making an argument for embracing the awesomeness of tiny surface-mount components and then getting a place like CircuitHub to do a fully-assembled bulk run of them.


Thanks! Yeah that's a tradeoff that I've been struggling with. I still think there's a lot of additional learning potential with through-hole components you can breadboard, though you do sacrifice board space, availability of components, and sometimes cost by sticking to through-hole parts.

Since the electronics for this project are simple enough I've ended up leaning towards a through-hole board to pair with an Arduino, but I'll admit that I haven't kept up with the cost of small orders of fully-assembled PCBs these days.


To just throw out a counterexample, this is exactly the kind of project I'd prefer to build with through-hole components, rather than buying a preassembled PCB with SMT parts. On the one hand, I actually enjoy soldering, which I freely concede is bizarre; on the other, through-hole parts can actually be worked with by hand, so that if, for example, I want to replace a resistor with a trimpot or a digital pot in order to play around with flip speed or the like, I can totally do that.

If I were going to turn this into a product-shaped product, I'd probably turn it into three: a bare PCB with a BOM for the buyer to fill, a prestuffed SMT PCB for people who just want to assemble the display and run it, and in between a PCB with a bag full of parts, for those who want what I guess would be a reasonably high-end-beginner-to-low-end-intermediate assembly project without a lot of hassle.

But that's just me! Opinions vary, and mine's worth exactly what you paid for it. This is a super neat project, though, however it ends up!


That looks great! I was about to comment that you can still see a large one in use, but apparently NY Penn Station took theirs down earlier this year.

https://nyti.ms/2kqiicj


They have one at the San Francisco Ferry Building: https://www.youtube.com/watch?v=x1gKvyghHsk . They put it up in 2013. Apparently it weighs 700 lbs.


Thanks! Yeah, they've sadly been disappearing over time. But now that I've really gotten familiar with the complexity and precision required for each individual character I can totally understand why digital makes more sense than maintaining the old displays.


For some reason I forget what I wanted this for, but I literally was thinking about trying to find where you could order these things just a couple weeks ago. This is super cool. Good luck!


Building one is out of my league, but I would buy one for my desk.


Heh, I don't have any near-term plans to sell fully assembled modules, though I may consider making kits or something once I've refined things further. In the meantime, you could maybe try https://www.flapit.com ? No idea how they are (just came across them on the internet), but they appear to sell small assembled split flap displays?


This is great. I don't see how I can resist making one. I will probably 3D-print the hardware but use my own electronics and SW for controlling it. (I'm kind of addicted to writing my own software for controlling 3D printers and this looks like it would be an easy fit)

I suppose it's a stepper motor that needs to be moved to the right position? How is it homed?


Currently using an IR reflectance sensor and a hole on the spool gear - you can kind of see it here: https://github.com/scottbez1/splitflap/wiki/Electronics

The sensor is mounted on a PCB that can be physically shifted to apply minor adjustments to the home position. The adjustments could easily be done in software, but doing it physically "stores" the adjustment parameter for each module without having to reprogram the controller.
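The homing loop itself is simple: step until the sensor sees the hole, then treat that position as zero (plus the physically stored offset). A sketch of the logic; the step count and all names here are assumptions, not the project's actual firmware:

```python
# Simulate homing a split-flap spool: advance one step at a time until
# the IR reflectance sensor triggers on the hole in the spool gear.

STEPS_PER_REV = 2048  # assumed steps per spool revolution

def home(sensor_at_step, start=0):
    """Step forward until the sensor triggers; return steps taken.

    sensor_at_step(pos) -> True when the hole is over the sensor.
    Gives up after one full revolution (sensor fault / wiring issue).
    """
    for taken in range(STEPS_PER_REV + 1):
        if sensor_at_step((start + taken) % STEPS_PER_REV):
            return taken
    raise RuntimeError("sensor never triggered; check wiring/spool")

# Example: the hole sits at step 500 and we power on at step 490.
steps = home(lambda pos: pos == 500, start=490)
```

Shifting the sensor PCB physically just changes which step position triggers, which is why no per-module software parameter is needed.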


That looks great. Stanford has an awesome one: http://peterwegner.com/detail.asp?id=212


I'd buy it


Scuba diving computers have become a necessary part of diving. Current models have indecipherable interfaces and hang from dongles or are worn as bulky wristwatches. I have developed a scuba diving computer HUD with a simple graphical interface that is placed in the diver's mask and is easy to learn and use.

Also, scuba divers must maintain neutral buoyancy during the dive. The current method is manual, making it a difficult skill to master, and creating a dangerous risk for new divers. I have developed a physics-based automatic buoyancy compensator for scuba divers which is a technological advance that replaces the current manual systems.

With these two innovations, you would not need a certification course to dive safely.

Finally, wetsuits are made with neoprene, an air-bubble infused rubber. These highly buoyant suits force the diver to wear extra weight during the dive. They also compress at depth so the diver must compensate for the changes in buoyancy with the buoyancy compensator device. I have developed a wetsuit material using silicone and an additive that is a better insulator than neoprene and neutrally buoyant.

Eventually, I would like to put all three of these together into a complete recreational diving system.

www.nautosys.com


This is cool stuff and I want that HUD, but I take pause at the following statement:

> With these two innovations, you would not need a certification course to dive safely.

There is a lot more to diver certification than learning how to control buoyancy and how to read your computer. You need to be comfortable breathing through your mouth, you have to know how to share air, you need to know how to configure and connect your equipment, what materials are appropriate for what kind of dive. You need experience with some trusted people. Most importantly, there's only one way to learn that spit and baby shampoo are the only useful defoggers for your goggles.


It's true a big part of the certification program is acclimating to the underwater environment and learning emergency procedures. I guess I should say it would streamline the certification course. It would probably reduce the all-day classroom portion to a one-hour video.


At least two ways: scuba diving and skiing.


I agree. I remember doing my basic certification. 1/3 of the class almost failed just from the test that requires you to take your mask off underwater then put it back on and clear it. The second they closed their eyes while underwater they would freak out.


I agree, just holding your breath is a huge no-no in diving. So is learning to deal with emergency situations. Diving certification will still be needed in some form, even if it is reduced.


I love the sound of a dive computer HUD - though I'd be curious to see how it would be affected if your mask leaks. I guess the 'watch' would still be present - probably a good idea as masks can get lost (I had mine accidentally kicked off by a fellow diver once).

I do take issue with your assertion that certifications can be cast asunder. A lot of what's learnt is about safe diving (e.g. don't dive after you fly, safer dive profiles, what to expect at different depths wrt buoyancy changes, dci, getting narced, etc.) and what to do when technology fails.

Wetsuit buoyancy due to neoprene is pretty handy if you surface and are in distress. If you're towing an unconscious diver, for example, and are relying only on their BCD to keep them afloat, their legs drag terribly (as is the case for dry suits) and it makes the rescue very, very difficult. There are other safety advantages too - it's not clear to me why getting rid of buoyancy is a great idea.


I believe buoyant wetsuits are a safety feature - if you run into issues, you can release your weight and float to the surface.


But the buoyancy variation with depth is pretty dangerous, I think. I do not know if that variation is truly the fault of the suit or the fleshy diver.


Neoprene compresses under pressure, which means it's less buoyant at greater depths.


Yes, I agree, that is what I was saying also. I just wonder if the human body doesn't also change buoyancy with depth, making neoprene density variation pretty negligible by comparison.


From what I understand, no. The only thing that compresses is the air inside your lungs, but you keep breathing pressure-compensated air in and out, so your buoyancy doesn't change as you go down. Neoprene compression has a fairly large effect - pounds' worth of buoyancy. Look at this spreadsheet for example; at depth you lose 11.6 pounds of buoyancy due to the neoprene compressing: https://www.scubaboard.com/community/threads/the-ultimate-wi...
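A first-order model makes the size of the effect plausible. Assuming the suit's lift comes from gas bubbles obeying Boyle's law and roughly 1 atm per 10 m of seawater (real foam is stiffer than free gas, so this is an upper bound):

```python
# Estimate how much buoyancy a neoprene suit loses at depth, treating
# the foam's gas bubbles as obeying Boyle's law (V ~ 1/P).

def suit_buoyancy_loss(surface_lift_lbs, depth_m):
    """Buoyancy lost (lbs) at depth for a suit with the given surface lift."""
    pressure_atm = 1.0 + depth_m / 10.0      # ~1 atm added per 10 m of seawater
    remaining = surface_lift_lbs / pressure_atm
    return surface_lift_lbs - remaining

# A thick suit with ~15 lbs of surface lift, at 30 m (4 atm absolute):
loss = suit_buoyancy_loss(15.0, 30.0)  # -> 11.25 lbs lost
```

That lands close to the 11.6 lb figure from the spreadsheet linked above, so the ballpark checks out.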


> Finally, wetsuits are made with neoprene, an air-bubble infused rubber. These highly buoyant suits force the diver to wear extra weight during the dive. They also compress at depth so the diver must compensate for the changes in buoyancy with the buoyancy compensator device. I have developed a wetsuit material using silicone and an additive that is a better insulator than neoprene and neutrally buoyant.

Awesome. The wetsuit might also be used by people with neoprene allergy. I have been using a suit from Fourth Element which does not trigger any allergies but doesn't keep me very warm either.


Have you looked into adding navigation, mapping, and comms to your dive computer? My company has a side project building a dive watch for divers. It has a navigation system and a communications aspect where divers can send messages to each other and track a dive buddy's position. The project is mostly for fun but there is a market for it.


How do they track position? I know that (eg) GPS and Bluetooth don't work under water (or at least their range is so bad that they aren't useful). Seems like an interesting problem!


You are right! It is a very hard and interesting problem. Radio will not work underwater. Inertial/mag sensors can be small and cheap but the quality is poor, so I have been using some 'intelligent' (hate to use that word these days) processing of the data. It is still a relative position estimate, but if you can minimize the drift you are good to go. To help, we have communications between divers using acoustics. Using some clever DSP techniques and information sharing between divers you can do a lot.
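To make the idea concrete, here is a toy 1-D version of combining a drift-prone inertial estimate with an occasional acoustic range to a buddy. The blend factor and all names are invented for illustration (a real system would use proper filtering, e.g. a Kalman filter, in 3-D):

```python
# Toy complementary filter: blend a dead-reckoned position with the
# position implied by an acoustic range measurement to a buddy whose
# position is shared over the acoustic link.

def fuse(dead_reckoned, acoustic_range, buddy_pos, trust=0.8):
    """Return a blended 1-D position estimate.

    trust = weight given to the acoustic measurement (assumed value).
    """
    implied = buddy_pos - acoustic_range   # we are `range` metres short of buddy
    return trust * implied + (1.0 - trust) * dead_reckoned

# Inertial drift says 12 m; acoustic ranging to a buddy at 20 m says 10 m away:
est = fuse(12.0, 10.0, 20.0)  # -> 10.4
```

The point is just that each acoustic fix bounds the accumulated inertial drift, which is what makes the relative estimate usable over a whole dive.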


Are you hiring? I would love to work on this. This sounds incredible.


Interesting concept- do you have a product/prototype yet?


This sounds amazing. Is there anywhere I can learn about your wetsuit technology? I'm interested in the crossover to surfing.


I have a project idea that might be useful for someone to learn about audio processing (and maybe neural networks?). I like to listen to audio and watch video of podcasts (and lectures and other human speech) at faster speeds. Sometimes, especially if I'm trying to "skim" to see if the media is worth listening to carefully, I'd like to listen at 3x or faster. Very often, the limiting factor is the intelligibility of the actual words rather than mentally parsing them.

Some software already removes complete silences, but this is a 10% effect and I think this could be taken much further. I would love audio software that could manipulate high-speed human speech to improve intelligibility by preferentially compressing parts with low information content (like vowels and "uhs") and uncompressing, or even "repairing", info-dense parts like sequential consonant sounds.
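As a minimal sketch of the variable-rate idea: keep "dense" frames at 1x and compress "sparse" frames harder. Here short-time energy stands in for information content, which is a crude proxy (a real system would need a model of phonetic density); frame size and thresholds are arbitrary:

```python
# Variable-rate speedup: frames whose energy is below a threshold are
# decimated hard; frames above it are kept intact.

FRAME = 4  # samples per frame (tiny, for illustration)

def variable_speedup(samples, sparse_factor=4, threshold=0.1):
    """Return samples with low-energy frames compressed by sparse_factor."""
    out = []
    for i in range(0, len(samples), FRAME):
        frame = samples[i:i + FRAME]
        energy = sum(s * s for s in frame) / len(frame)
        if energy >= threshold:
            out.extend(frame)                   # info-dense: keep everything
        else:
            out.extend(frame[::sparse_factor])  # sparse: keep 1 in N samples
    return out

loud = [0.5, -0.5, 0.5, -0.5]    # high-energy frame
quiet = [0.01, 0.0, -0.01, 0.0]  # near-silence
result = variable_speedup(loud + quiet)
# -> all 4 loud samples survive, only 1 of the 4 quiet ones does
```

Naive decimation like this introduces pitch and click artifacts; production tools use time-stretching (e.g. WSOLA/phase vocoder) per segment instead, but the scheduling logic is the same.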

I've looked around and haven't been able to find anything like this. Could make a nice stand-alone app, or a library to sell to a podcast player.

http://softwarerecs.stackexchange.com/questions/27175/video-...


I'd find that really compelling.

Consider hand-producing a sample via manual audio editing, to demonstrate the limits of what ought to be possible. Find some audio you think could be listened to at that rate, see how fast you can listen to it via standard sound stretching techniques (e.g. libsoundtouch and similar), and then demonstrate how much better you can do with hand-editing. Worry about how to automate that after you successfully demonstrate that possibility and make it compelling.


Yea, it's a good point. We can logically distinguish between the basic audio problem (which might be really hard) from the automation problem.

On the other hand, suppose we somehow got good training data by getting a bunch of audio samples at the same number of words per minute that were graded by human listeners as easy or hard to understand. Then in principle something like a neural net might figure out what audio features were responsible for intelligibility and then adjust the non-intelligible audio in that direction (a la using convolutional neural nets to make pictures appear in the style of a famous painter without changing the content). This would be done automatically without any humans actually understanding the solution.


Sure, you could try any number of things to produce a solution. But even if you try an approach where you don't know at first what might work, you should likely still put effort into figuring out what features made it work, so that you can improve it further and maintain stability.


Nobody else has mentioned anything along these lines so I'll do so.

If you haven't talked to the blind community about this sort of thing already I would strongly recommend doing so as they'll be able to rapidly point you at the bleeding edge of what currently exists - they routinely use text to speech cranked up to illegible speeds.

(I also heard of one guy who would listen to TTS from his computer with one ear, while using his other ear to hold a phone conversation. I believe I read this in Thunder Dog: The True Story of a Blind Man, His Guide Dog, and the Triumph of Trust (978-1400204724 / 1400204720).)

The second reason I suggest this is that the blind community is a dense population of users who use systems like this as part of their everyday lives, so if you aimed such a system at them the feedback quality would be extremely high, allowing for ridiculously fast iteration and a great product. High-speed speech (TTS in particular) is a genuine technology hole/need.


Great comment, thanks. Agreed that blind users would be at cutting edge of this and provide the most useful feedback.

> they routinely use text to speech cranked up to illegible speeds.

Note that this is not quite what I'm talking about. TTS is a slightly different problem since you're constructing the speech, and you can actually choose voice synthesizers that sound weird but remain intelligible at high speeds. On the other hand, trying to modify existing voice audio (with no text) for greater intelligibility at high speeds is a different problem.


>> they routinely use text to speech cranked up to illegible speeds.

> Note that this is not quite what I'm talking about.

Good point. I think I forgot to fully qualify that statement - while initially composing my reply I got completely distracted with TTS and forgot this was about altering speech to go faster. I realized and went back and edited it a couple minutes later, but didn't adjust it sufficiently.


I did a podcast/audiobook player app ('lectr') with gap removal and had experimental support for what you describe. Roughly, it removed samples if there was no significant difference in the frequency-domain spectra. This was a simple threshold test (the threshold could be adjusted by the user).
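The spectral-difference test described above can be sketched in a few lines. This is my reconstruction, not lectr's actual code: a naive O(n²) DFT keeps it dependency-free, and the threshold value is arbitrary:

```python
# Drop frames whose spectrum barely differs from the last kept frame's,
# i.e. "nothing new is being said in this frame".
import cmath

def dft(frame):
    """Naive discrete Fourier transform (fine for tiny illustrative frames)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def drop_static_frames(frames, threshold=0.5):
    """Keep a frame only if its spectrum changed enough vs. the last kept one."""
    kept, prev = [], None
    for frame in frames:
        spec = dft(frame)
        if prev is None or sum(abs(a - b) for a, b in zip(spec, prev)) > threshold:
            kept.append(frame)
            prev = spec
    return kept

tone = [1.0, 0.0, -1.0, 0.0]     # a steady tone, repeated
changed = [0.0, 1.0, 0.0, -1.0]  # same tone, phase-shifted: spectrum differs
result = drop_static_frames([tone, tone, tone, changed])
# -> the two repeated tone frames are dropped; first and last survive
```

Exposing `threshold` to the user, as the app did, is what let listeners trade speed against artifacts per recording.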

It works for some material. Some speakers (usually professional book readers) have even pacing and it's more effective. A simple 1.5-1.7x speedup is also useful there. Podcasters tend to be 'bursty' in that they speak rapidly and then pause, and gap removal was more useful there.

I stopped working on it as my lifestyle changed such that I wasn't using it. There was almost no commercial interest in the project.

I'd love to see a library, especially with MOOCs etc becoming mainstream. Most players provide 1.25/1.5/2.0x options, but between 1.5x and 2.0x is the sweet spot for me, and almost no apps provide that.


Thanks for your insightful comment! Is Lectr still available anywhere? Nothing showed up on a Google search.

> between 1.5x and 2.0x is the sweet spot for me, and almost no apps provide that.

Agreed, this is super frustrating and is some evidence that the market isn't that interested. (Why would they pay for advanced compression if no one bothers with trivial stuff like finer gradation?)

Fwiw, on Android I use Audible, PocketCasts, and Audipo for audiobooks, podcasts, and general audio, respectively. They are all serviceable and have 0.1x or finer granularity.


Not available any more. It was for iOS. I pulled it from the App Store a few iOS releases ago because it was crashing on startup.

I've taken the website down but you can still view it on archive.org: http://web.archive.org/web/20160329053206/http://www.lectrap...

I still use Swift for finer gradation. It also worked for video files, which was fantastic (mine never did that). Swift is 32-bit and will stop working soon. I haven't found a replacement yet.

(On iOS, at least, the 1.25/1.5/2.0 thing is because that's what the OS provides for very little effort. Finer controls require use of a supported-but-undocumented API or AudioUnits.)


HTML5 video speed can be controlled with a bookmarklet on Desktop at least:

    javascript:document.getElementsByTagName("video")[0].playbackRate=4
You probably don't want my 4x speed as chosen here ;)


Interesting idea. I used to watch Berkeley webcast videos at 1.5x to shave off ~30 minutes from a 90 minute lecture. Any faster wouldn't be intelligible.


I do the same. I usually watch all lecture videos at 1.5x or 2x speed. Saves me tons of time, which is good because usually the videos are of random interesting topics that are distracting me from work.


It is the case that you can teach yourself to understand faster and faster speech. The blind often have human interface devices which speak at absurdly high vocal acceleration.


This also depends heavily on information density. I can listen to some types of content at 2x without issue; other content I can't accelerate more than 10-20%.


Some content I have to scrub and rewatch even at 1x speed.

Some content I can watch at 4x.


Blind people regularly teach themselves to listen at 4-5x speed.


This is doable. If one were, uh, hypothetically, to write a library that did this, how would the HN community recommend monetizing it? I have no experience selling my wares to anyone but a large company, let alone an app store or something like that. Would the tool primarily be applied to nice clean audio, like books on tape, or also required to work with significant background noise?


IANAL but I can't see any harm in releasing some demo samples of what your system can do. To be fair, an example of how the system breaks down when it gets pushed too hard would probably be useful too.

With that in hand, perhaps you might create a new post showing the samples and asking for advice. (Maybe you could also let people make sample requests via comments?)

It's possible that you may receive offers from people interested in being business partners (treating the situation like a startup); have fun with that ;)


An audio version of those bots that summarize news articles could solve the problem in a different way.


At some point it might become useful to feed the audio into speech recognition, then feed the result into a text-to-speech engine. You will lose all of the prosody and speaker characteristics, but blind people run their screen readers at crazy speeds, so it will stay intelligible.


I'd definitely use something that did this :) I usually speed up youtube videos/podcasts to 2x. Many times this is possible because the content that is presented is already known or is easy to understand. However, this breaks down when learning something new.


'Podcast Addict' (for Android) can speed podcasts up like this, up to 5x. Usually, I can comfortably bump the speed up to 1.25x-1.4x without any trouble. 2x for me ends up being too fast, but useful if you're skimming and not trying to absorb every second of the audio.


Another solution to getting through long audio segments quickly is learning to read spectrograms (time/frequency intensity graphs of speech).

Then you translate the problem into speed reading.

I'm (slowly) investigating this, expect a HN post in a few years :)


Is the goal for this to happen in real time?


That'd be nice but not really necessary. Biggest use case would be podcasts and lectures, which are usually downloaded in advance so there's plenty of time for off-line processing.


Problem: The process of getting thoughts from your head into an organized, written draft form isn't as fast or accessible as it could be.

Project: I'm building a conversational UI / bot (https://writing.ai) that helps people write faster. The basic idea is that it asks you a series of questions about a topic, asks follow-up questions for more detail as needed, and when it's done outputs a completed draft. You're still providing the content, but the system understands the structure of completed documents and knows what questions to ask to get the actual writing done more easily.
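The flow can be pictured as a fixed outline driving an interview. A toy illustration only; all questions, keys, and structure here are invented, not writing.ai's actual system:

```python
# Interview-to-draft: ask one question per outline slot, then assemble
# the answers into a rough draft in outline order.

OUTLINE = [
    ("thesis",   "What is the main point you want to make?"),
    ("evidence", "What is the strongest example supporting it?"),
    ("takeaway", "What should the reader do or remember?"),
]

def draft_from_answers(answers):
    """Join the interview answers into a draft, following the outline order."""
    return "\n\n".join(answers[key] for key, _question in OUTLINE if key in answers)

doc = draft_from_answers({
    "thesis": "Split-flap displays are worth preserving.",
    "evidence": "SF's Ferry Building still runs a 700 lb unit.",
    "takeaway": "Build your own from the open-source design.",
})
```

The hard parts the product describes (knowing *which* follow-up to ask, and varying the outline per document type) are exactly what this sketch leaves out.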

I just quit my job and started working on this full time last week, so no public version yet, but signups are very welcome.


This is a really interesting project. I just signed up to get notified of launch (andrew @ indentlabs.com) but if you want to shoot me an email with what you're working on and where you're at, perhaps I could help out or give you some feedback on the idea/implementation.

Either way, always good to have more eyes occasionally. :)


Thanks, sent you an email!


Echoing the same sentiments as drusepth - would love to provide feedback and test things out if helpful (michael (at) michaeldempsey (dot) me).


Thanks Michael, just sent you a note!


This is super awesome! I soo need this. Drafting with a structure is a part I often skip when trying to write something, which is why writing has become quite hard. I should stop skipping it. But your idea is great, looking forward to seeing it in action!


Cool. I need a conversational AI to help me make good decisions about my to do list.

Orgbot: I'm sorry Dave, you won't have the energy for 3 meetings and a Meetup on Monday. Let's spread the meetings throughout the week and find something you can do in your pajamas while hung-over.


This looks really intriguing, I look forward to seeing where you go with it.


Appreciate it! I'm looking forward to it too, at this point.


I also find this fascinating. I just signed up: ryan@recraigslist.com. I also have another use case I'd like to bounce off you if you could shoot me an email. Thanks!


Email sent, happy to hear about your use case!


What a cool idea! Writing is my least favorite part of development, just sent an email (loxias AT mit DOT edu), can't wait to try it out.


Thank you for the kind words!


Sounds very promising, I can think of some professional applications in software development. Signed up.


Thanks, and happy to hear about any applications you have in mind! Feel free to email nate@writing.ai if you'd like to discuss.


Sounds interesting. Is it more for creative writing or more business oriented?

I'm signed up in any case.


Thanks for the signup!

My initial target is short-form content such as blog posts and essays. I'm going to wait to see how that goes to decide what to focus on next, but odds are that it'll be more structured content like academic papers or technical reports.

I'm definitely not ruling out an eventual focus on creative writing, but it'll take a bit for the system to get to that stage.

That said, I'm designing it with the ability to ingest annotated writing samples of any sort, so it's possible that a wider range of writing types will be supported sooner than expected.


Looks very interesting. I guess I might be fighting a losing battle here - but any plans for an offline solution? This is the kind of thing that I'd probably love to use on the train (many tunnels, no 4g / spotty in-train wireless) and on flights (wifi is coming, but not always) - or other places without good network coverage (composing a blog post on a hike..).


Yeah, I understand, it's a completely valid use case. I'm personally much more productive when I'm completely disconnected, so I really identify with the request.

I had, in the past, also been thinking a bit about how the system could work offline because it would avoid a lot of security issues for something like a medical office that wants to create content that's then copied into a medical records system.

My initial thought was to bundle a local copy of the server with pre-trained models, but that becomes problematic on mobile clients. I'm writing the server in Go, so if I go that route I'd probably need to reimplement parts of it in another language and avoid using any remote APIs.

So the answer is: very likely yes, but not initially. You've definitely moved the functionality up my planned features list.


A self-contained binary would work fine for me (If it could run on a Surface Pro 4 - or as a VM image - eg. hyper-v and/or virtual box).

I always prefer a Free software/open source solution - but I'm not sure how you'd monetize that. Maybe charge for the app (ios/android) - and provide a free/open self-host server solution, along with a subscription service and a web client? (The payment for the app would also grant access to the subscription, and for those that didn't want to self-host/wanted to support the project could pay for the subscription and use the web SaaS solution?).


The plan is to offer a subscription service even if there's an offline component. For a bunch of reasons, it's the best approach for a one-person venture trying to get off the ground.

I've been an OSS user and supporter for a long time. I don't think I'll be open sourcing the core system anytime soon, but I'm very likely to release any useful NLP or ML-related libraries that are created as part of this project. Not sure what those will be or when they'll be ready, but I do have a mind to give back to the OSS community.


I understand. Fwiw I recently had a look at "cyberduck" again - they do a gpl+free(nag) binary+sale through windows/Apple appstore:

https://cyberduck.io/

Not sure what kind of income they see, though.


I just signed up and am awaiting launch! dkermitt@gmail.com


Thank you! I'll keep working to get it launched ASAP. The response from HN today has been extremely encouraging.


Problem: My experience taking audio tours at various museums showed the use of antiquated and expensive hardware. Most alternate solutions used mobile apps which are inconvenient to download and take more time to release.

Project: I went about developing a web app that allows anyone to quickly create an audio tour for free:

https://www.youraudiotour.com/create

I also integrated Amazon Polly to automatically generate audio based on text inputs. This further increases ease of use by eliminating the need to record and edit audio.
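The Polly call for that is small. The author's actual Rails code isn't shown; this sketch uses boto3 and Polly's real parameter names, but the voice and format choices are my assumptions:

```python
# Build the arguments for Amazon Polly's synthesize_speech call, which
# turns a tour stop's text into audio.

def polly_request(text, voice="Joanna", fmt="mp3"):
    """Return keyword arguments for polly.synthesize_speech()."""
    return {"Text": text, "VoiceId": voice, "OutputFormat": fmt}

params = polly_request("Welcome to the Impressionist gallery.")

# With AWS credentials configured, the actual call would be roughly:
#   import boto3
#   audio = boto3.client("polly").synthesize_speech(**params)["AudioStream"].read()
```

Regenerating the audio whenever the text changes is what removes the record-and-edit step the comment mentions.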

It was a great learning experience since I learned rails from scratch and don't have a background in programming. Now to see if I can get more people to use it!


I like this idea, but would extend it to the whole world based on geolocation. Use an app and earphones to find voice/text descriptions of places and landmarks, monuments, sculptures, buildings, etc. Everything can be geo-toured.

Of course a museum is the perfect setting, but based on geolocation it doesn't have to be linear: you can go anywhere in the museum and get voice/text about that particular place or masterpiece, and when exhibits change it's easy to update the audio/text for that spot.

Love it!


Really cool idea, I think people would love that! The biggest hurdle would be getting enough interesting content. You would almost need to create a Wikipedia for real world locations.

If you were able to get enough content I think that would be an incredible app/service.


I'm doing something similar for local news/events, called SeeAround.me, where people can see/submit local news stories and their locations. But I could see sort of a cross between that and Your Audio Tour as particularly interesting for people who want to do their own walking tours, for example.


I already made that for a client. Don't think users will write stories from their mobile, at least good ones. It was difficult to start with no content, too similar to twitter.


I find that's the trouble with a lot of good ideas I have. They work great if you can get to scale, but that's easier said than done!

Is your site still up? I would be interested to see it.


> You would almost need to create a Wikipedia for real world locations.

Exactly. Wonderful service with world wide use and translatable to all languages. You definitely would need "curators" for every place and at the beginning let people add their own transcripts in order to build a huge database.


Nice! Check out https://www.detour.com/, similar platform primarily focused on city-level audio walking tours.


Cool, thanks for the link! Looks like a great way to explore new cities.

I'm hoping my site will be more helpful for smaller, less tech savvy, organisations who want to create their own tours. Many of these sites create tours in house or charge a pretty penny to create your own app.


Nice! I'll be passing this along to my museum friends. I could see this being especially useful for smaller museums that may not have the budget to buy a full application or edit the audio. Do you think you'll still keep the custom-audio option, or only Amazon Polly?


Thanks! Appreciate the kind words. The target was definitely for smaller museums that lack the time and budget for a full tour. I noticed a lot of these museums don't have audio tours which I think is a shame.

I actually had the options to include custom audio and record audio in the browser. I took them out to simplify the product but I would definitely consider re-implementing them if there was demand.


You should check out the guide from Casa Batlló in Barcelona. It's definitely the best one I have seen so far: https://www.casabatllo.es/en/visit/videoguide/


This is awesome! Would be interesting to add BLE Beacon support for autoplay when you walk up to a specific area on the tour.


Thanks! Yeah, there are a ton of features I think you could add to this - beacon and location support would be very cool.


Great idea! Would it be implementable with Google Project Tango? Currently working on an idea with museums, and they don't want to handle any installations (i.e. beacons)


At the moment I haven't considered how I would implement that feature. First I want to see if I can get people to use the basic version.

Do you have a website? If so you should share it, would be interesting to see what you're doing.


I see:) We are in the same boat, too early even for a website! Will share an update once we have some progress!


Problem: I don't know when the optimal times to go to Krispy Kreme for hot donuts are.

Solution: http://hotdonuts.info/

And yes, this is the most important problem in the world.


I used to work at a PNW Krispy Kreme and the hotlight hours there are wildly off. For the store I worked at hotlight is 5-8AM and 4:00-6:00PM.

In my experience, most Krispy Kremes (at least the ones franchised by Kremeworks, which owns the stores from Vancouver, BC to Beaverton, OR and then one in Maui) have pretty set hotlight times with a variance of a few minutes due to human error.

There are also sometimes "bonus" hotlight hours where they make donuts in between normal hours due to demand.

If you're going by the Krispy Kreme API, it's totally dependent on when people actually turn the physical hotlight on, and people tend to forget to turn it on or off, especially during the "bonus" times.

Also, they only turn the hotlight on for the Original Glazed donuts, but that totally leaves out the hot cake donuts and fritters.

There's not a lot of advice in this post, just some insider info. It's definitely a cool project, but you might be better off just asking your local store when hotlight times are.


I believe the MacArthur Foundation will be contacting you soon.


You have a problem with zip codes that start with 0 eg 07001.


Dammit, I didn't even know that they could start with 0. Thanks!


New England, New Jersey and parts of New York start with a 0.

http://www.mapsofworld.com/usa/zipcodes/images/usa-zip-code-...


In high school - BASIC class - we had to build a program to let a user input an address.

I took the input for the ZIP code as a string, and the teacher 'corrected' me because ZIP codes were numbers, not letters. I said "if it starts with 0, that would be lost".

"ZIP codes don't start with 0," she replied.

"Umm... yeah they do." I pull out a copy of my New Zork Times from Infocom, located in MA, and their ZIP started with 0.

It was only years later I learned the ZIP code system didn't even start until the 60s, and she'd likely grown up without it even being a thing, so I (retroactively) cut her some slack. Really didn't think anyone in the US could not know that in 2017 but... I still run into people who don't. And... it's probably less important today (what with email and ebills and whatnot), so I'll cut everyone else some slack too... :)
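The bug in this thread is easy to reproduce. A minimal sketch (in Python, purely for illustration) of why ZIPs must be stored as strings, with a hypothetical `normalize_zip` helper for data that was already damaged:

```python
# ZIP codes must be stored as strings: parsing them as integers
# silently drops the leading zero used across New England and NJ.
def normalize_zip(raw):
    """Zero-pad a 5-digit US ZIP that may have lost its leading zeros."""
    digits = str(raw).strip()
    if not digits.isdigit() or len(digits) > 5:
        raise ValueError(f"not a 5-digit ZIP: {raw!r}")
    return digits.zfill(5)

print(int("07001"))            # 7001 -- the bug in question
print(normalize_zip(7001))     # "07001" -- recovered, if you know it was 5 digits
print(normalize_zip("07001"))  # "07001" -- strings pass through unharmed
```

Zero-padding only recovers the value if you know the field was a 5-digit ZIP to begin with; ZIP+4 and non-US postal codes need their own handling.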


> Really didn't think anyone in the US could not know that in 2017

Well I guess you are just much much smarter than us. Thank goodness you cut us some slack. ;)

I don't know if I have ever seen a zero prefixed zip code.


yeah... was trying to be a bit tongue in cheek (my tongue, my cheek).

I guess I just pay too much attention to addresses/formats, and have for years.

You can have as much slack as you want, FYI :)


TIL. Thanks, fixed it!


Fixed it!


I'd love to use this, but due to an even bigger problem, unfortunately my body can't handle Krispy Kreme donuts: they're way too high in calories.


Is the start date supposed to be in the future?

"The scraper came alive Tuesday, March 7, 2017 at 12:33 AM."

Or am I in the past...


It's April 4th, 2017.

But it's okay… I thought it was 2018 earlier today.


totally sharing with all the rest of my friends in atlanta. sometimes fun and simple ideas are the best!


I just found an official Krispy Kreme App that does the same: http://krispykreme.com/hotlightapp

But I'd rather not download an app.


Yeah, I knew about that, but I wanted a way to see historical data so people would know "the light will probably be on at these times". It uses the same API.
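The "light will probably be on at these times" aggregation can be sketched in a few lines. This is an illustration of the idea only - the data and the `hourly_on_rate` name are invented, not taken from hotdonuts.info:

```python
# Sketch: bucket historical hotlight observations by hour of day and
# report the fraction of observations where the light was on.
from collections import defaultdict

def hourly_on_rate(observations):
    """observations: iterable of (hour_of_day, light_was_on) pairs."""
    on = defaultdict(int)
    total = defaultdict(int)
    for hour, was_on in observations:
        total[hour] += 1
        on[hour] += was_on
    return {h: on[h] / total[h] for h in total}

# Invented sample data: three checks at 6 AM, one at 5 PM, one at noon.
obs = [(6, True), (6, True), (6, False), (17, True), (12, False)]
print(round(hourly_on_rate(obs)[6], 2))  # 0.67 -- on about 2/3 of the time at 6 AM
```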


I love it! cool project


Prepping notes before I dive in to redraft and finalize Ghost Engine, my new space opera for July 2018 (UK publisher will be Orbit; US publisher TBA).

Also working on a Wild Cards short story for George R. R. Martin, and awaiting the copy edits on Dark State, the second Empire Games book (publication scheduled for January 2018, by Tor).

And in the queue behind that, is the scheduled final rewrite of Empire Games book 3, Invisible Sun (due for publication in January 2019, from Tor).

This should keep me busy through to the end of the year!


Project: I'm building an 'abstract visual debugger,' which aims at clarifying the behavior of algorithms under execution by letting you watch the data they're manipulating.

Screen: http://symbolflux.com/images/avdscreen.png

Quite outdated video: https://youtu.be/sdEo4v2yivM

It works by monitoring data structures (and soon general objects!) in your code, and sending operation data (e.g. element added/removed) to the server app which does all the actual visualization. Different clients can be written for different languages, though I've only written one for monitoring Java code so far (but the clients are semi-trivial to build).
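For a feel of the client-side idea, here's a toy sketch in Python. The names are invented for illustration and are not the actual DSViz client API; only `append`/`pop` are instrumented, for brevity:

```python
# Toy sketch: wrap a data structure so every mutation emits an
# operation event that a visualizer (the "server app") could consume.
class MonitoredList(list):
    def __init__(self, emit, *args):
        super().__init__(*args)
        self.emit = emit  # callback standing in for "send to the server"

    def append(self, item):
        super().append(item)
        self.emit({"op": "add", "index": len(self) - 1, "value": item})

    def pop(self, index=-1):
        value = super().pop(index)
        self.emit({"op": "remove", "index": index, "value": value})
        return value

events = []                       # stand-in for the network channel
xs = MonitoredList(events.append)
xs.append(3)
xs.append(7)
xs.pop()
print(events)  # three operation events, in mutation order
```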

Problem: a lot of what's difficult in programming is that you can't watch the 'state' of your program as its running; we approximate it with print statements etc.—but there is no easy way to view trees, graphs, tables, lists, hashmaps, etc. especially when they are being actively modified by a program.

I'm hoping to have an alpha ready in a week or two. Back to work!

Edit: split into Project/Problem format.


This is a fantastic idea!! Do you have a github link or a project page I can bookmark and periodically check on? How hard would it be to adapt to C and/or C++?


> How hard would it be to adapt to C and/or C++?

I don't think it would be particularly difficult. I can tell you more about what the process would be like if you're curious.

Project page is here for now: http://symbolflux.com/projects/avd (Everything there is unfortunately quite out of date because my initial work on it was ~2.5 years ago, and in the past 3 months I've picked it up again and run with it :) I'll update the page soon though.)


Also, here is the source for the Java client: https://github.com/westoncb/DSViz-Java-Client

It's still a work in progress, but I plan on wrapping things up for an initial release over the next couple of weeks—at that point, I'll write a tutorial explaining the structure of the client and principles involved in writing a new one etc.

Edit: forgot to mention, I don't have plans for open-sourcing the server at the moment since the plan is to sell it so that I can continue working on it.


You might be interested in this: https://visualgo.net


My co-founder and I are working on a shopping platform for furniture and decor seen on the set of movies and TV shows: https://www.seenonset.com

Startups definitely like to deck out their offices with some of the nicer high-end designs. Even Y Combinator's home page image carousel has the likes of the Bertoia Diamond Chair and the Nelson Saucer Pendant Lamp on show! Though a lot of startups go for the replica route - I know Airbnb went with replicas in their Dublin office.

The show Silicon Valley (https://www.seenonset.com/tvshows/174/silicon-valley) is a great example of what we do.


This is beautifully thorough. I can't say I'd purchase items based on their appearance in TV shows. But, the idea of a catalogue of the items and the time it must take to create this is beautiful in an artistic way to me.


Thanks for the kind words. We actually also try to cater to folks who are doing a more typical approach to online shopping (e.g. searching for a desk lamp). The set design is a nice way of showcasing the product in use.


I really enjoy Seen on Set, I've been checking it out for a little while now. Keep up the good work!


I'm designing a 3D printed robot arm that doesn't suck.

It uses all brushless outrunner motors for at least the 4 large axes (final two in wrist may be hobby servos for now).

The arm is similar dimensions to a human arm. Initial calculations show it will be able to lift more than 2kg at full extension if I can make it strong enough. Actually the (rough) calculations say more than 10kg but that would break something. It's also 200 milliseconds for 90 degrees of shoulder movement, supposedly.
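As a sanity check on that 2 kg claim, the worst-case holding torque at the shoulder is just payload weight times reach. The reach value below is an assumption ("similar dimensions to a human arm" suggests roughly 0.6 m), and this ignores the arm's own mass, so real motor torque must be higher:

```python
# Back-of-the-envelope shoulder torque for a 2 kg payload held at
# full horizontal extension. Reach is assumed, payload only.
g = 9.81           # m/s^2
payload_kg = 2.0
reach_m = 0.6      # assumed human-arm-ish reach

shoulder_torque = payload_kg * g * reach_m  # N*m, worst case
print(round(shoulder_torque, 2))  # 11.77 N*m
```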

All hardware and software is open source, including the brushless drivers.

It's printed on low-cost 3D printers. I am currently using a $320 Monoprice Maker Select modded with a 1.2mm nozzle, and I will be adding a $450 TEVO Black Widow modded with a 2.5mm-dia nozzle to the mix. I may try a pellet feeder. The goal with the large nozzles is to increase strength and reduce print time so it's not maddening to print, which will make it easier to iterate.

My goal is to make an open source low cost arm that is useful for manufacturing, and then design open source workcells for it and actually use it for productive work.

You can look at the jumble of CAD BS right now. https://cad.onshape.com/documents/5b474270e4af0ef979e6fade/w...

Find the tab with the assembly called "Arm3 Assembly".

Please fork it and contribute.


Cool. The onshape link asks me to login right away. Without logging in, I don't see your 3D item. You're invited to also upload the robot arm to www.3dprintmakers.com.


Onshape is free and cloud based. It's good because you can easily create an account and edit the file yourself, but a bummer because it's not free software and you can't export raw files, only dumb solids.

Despite the drawbacks, I'm keeping it in OnShape though, where anyone can fork and edit the file for free.


You should look into Automata. They have built some impressive 3D printed robotic arms with very tightly integrated industrial design.


will it be fully 3d printed (no metal/carbon tubes)? I think the main problem with hobby robot arms is flex in the arm itself and backlash in the motors.

Low cost arms (eg. dobot) have "0.1mm repeatability" with no load but the moment you put a kg on the end effector it starts flexing.


Aside from bearings, motors, and possibly some fasteners, it will basically be entirely 3D printed.

I'm using printed spur geartrains that will have all kinds of backlash. In an effort to make something that is primarily printed, I'm throwing any hope of precision out the window. At least mechanical precision - we can use visual servoing to get more accuracy at the end effector.

But my hope is that the other benefits (low cost, easily changeable design, good speed and strength characteristics) will make it useful even with poor rigidity.

I imagine for example pulling things out of fixtures and moving them in to boxes.

There must be a good variety of tasks that can be done with an arm like this. I'm working to the strengths of 3D printing and not trying to fight what it isn't good for.

That said, I may reinforce the frame with a carbon fiber wrap if more stiffness is needed.

The goal is to have a low BOM cost AND low tooling cost so I want to avoid the need for any metalworking equipment.


A couple...

1. Record one album a month. An album must consist of no less than four songs. At least one song must be an original. I can only record on a 4-track cassette recorder.

Problem: I'm getting older and find myself nostalgic for the days when I was running my record label and playing in bands. A small, manageable project with no expectations or demands to scratch that itch and get me away from computers for a few hours a week is nice.

2. http://postgra.ph -- Almost ready... just a landing page to test an idea: A GraphQL Backend as a Service powered by PostgreSQL. I just have to add the SSL certs tonight when I get home.

The MVP is based on ideas from PostgREST/PostGraphQL -- generating the API from the public schema of the database. It'd be the bare-bones service that I could throw together in a couple of weeks.

If it takes off then I'd look into integrations, adding PipelineDB support, auto-scaling, etc.

Problem: I just wanted an API-as-a-Service that would give me full control of the data schema but didn't require me to write yet-another-web-service-in-dynamic-language-framework-foo. There are nice solutions out there for different folks but I'm a big postgres fan and wanted something that didn't require me to learn a new framework, interface, etc.


It's going to take more than a couple of weeks - I've been working on it for more than a year :) https://subzero.cloud/


How do either of you plan to handle authentication and authorization? How will you handle CORS? Just curious as I've worked in this realm as well.


Authentication and especially authorization can be completely handled by PostgreSQL. In front of it all sits OpenResty (nginx) so that is where you would add whatever headers you would need


jwt's are a touchy subject but was the well-trodden route I was planning to follow for authentication.

Integration with auth0 and other third-party services would be a roadmap thing for me.

Authorization can be handled by PostgreSQL: it has built-in facilities for role-based access control and row-level security. You can develop the authorization scheme that fits your application.
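To make the JWT leg of that flow concrete, here's a stdlib-only HS256 sign/verify sketch. It's an illustration of the token mechanics, not either project's code - a real service would use a vetted library, and the decoded role claim would typically be applied via SET ROLE so PostgreSQL's RLS policies take effect:

```python
# Minimal HS256 JWT sign/verify, stdlib only. Illustrative sketch;
# use a vetted JWT library in production.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    msg = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, secret: bytes) -> dict:
    header, body, sig = token.split(".")
    msg = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, msg, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))

# Hypothetical claims: a database role plus a user id for RLS policies.
token = sign({"role": "web_user", "user_id": 42}, b"secret")
claims = verify(token, b"secret")
```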


You're way ahead of what I'm thinking of. :)

Nice job.


For a service like this to work, one thing needs to be solved: automating the code deployment (I'm talking views/functions/roles/grants/RLS). As far as I know (and I've asked other people), this is not a solved problem. This is what I am working on now. The rest is done.


I was with you on your #1 until I saw 4 tracks- I'm not sure I could even get myself down to that few. I know the Beatles did it, but they were far greater musicians than I. I haven't listened to a cassette tape since 1999- how's the quality?

I think I could maybe do 4 tracks if it recorded to an SD card. Do you find the length of a tape adds to limitations in a way that keeps you brief?


> how's the quality?

Good. I record on high-bias Type II tapes. You still get some hiss and tracks bleed the tiniest bit. But I think it sounds pretty good for the kind of music I'm making.

I've been using the procedural drummer in Garageband as the drummer for some of my songs and with a decent amount of swing it sounds "authentic" on tape.

> Do you find the length of a tape adds to limitations in a way that keeps you brief?

You get about 60 mins of record time per tape so... not really.

I find I like recording this way for many of the same reasons I like writing on my typewriter: zero distractions, low friction between thought and recording.

The restriction of 4 tracks and that I only have a month to record 4 songs keeps me from "fidgeting" with a song. I'm not able to aim for "perfect." It's more ritualistic. I show up on the same evenings in the same space. I begin the ritual by opening my journal and listening to last week's tracks. I record some ideas, experiment. I end the ritual with a glass of bourbon. I close the book. I've written one song I think is pretty decent so far. Try as I might, though, they each have tiny, beautiful flaws. Only so much you can do.


I am developing a new TCP/IP stack targeting embedded systems primarily. It is being written in a restricted subset of C++14 (e.g. no dynamic memory or exceptions, but virtual functions are great).

There are a LOT of things complete already; it pretty much works. ARP, IPv4, TCP (with NewReno congestion control), PMTUD, DHCP client. The design is single-threaded around an abstract event loop that the user would generally need to implement, unless they find one of the two provided implementations useful. The focus is correctness and reliability rather than performance, hence no DMA support for now and maybe forever.
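The event-loop contract is easiest to show with a toy model. This sketch is in Python for brevity (the real stack defines the interface in its restricted C++14 subset) and uses a simulated clock; it only illustrates the "users schedule timers, the loop dispatches callbacks in time order" shape:

```python
# Toy single-threaded event loop: a heap of timers dispatched in
# deadline order. The seq counter breaks ties deterministically.
import heapq

class EventLoop:
    def __init__(self):
        self.now = 0.0
        self.timers = []  # heap of (deadline, seq, callback)
        self.seq = 0

    def call_at(self, deadline, callback):
        heapq.heappush(self.timers, (deadline, self.seq, callback))
        self.seq += 1

    def run(self):
        while self.timers:
            deadline, _, callback = heapq.heappop(self.timers)
            self.now = max(self.now, deadline)  # advance simulated time
            callback()

fired = []
loop = EventLoop()
loop.call_at(2.0, lambda: fired.append("retransmit"))
loop.call_at(1.0, lambda: fired.append("arp-timeout"))
loop.run()
print(fired)  # ['arp-timeout', 'retransmit']
```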

Source code is here: https://github.com/ambrop72/aprinter/tree/ipstack/aipstack

Actually this is currently in its testbed project (APrinter firmware for 3D-printers) where it supports the integrated web interface using a custom HTTP server.

In addition to the one embedded platform where the firmware currently supports Ethernet (Duet board), it is possible to run on Linux with a TAP interface which is my primary testing setup.

If anyone is interested (in assisting development ;) I can help explain things and show how to set it up.


How familiar should one be with networking? I know some C++ and know the low level C apis for maintaining sockets but never implemented them. I also never took a networking class.


It helps if you have ever looked at the packets, for example in Wireshark, and have at least some understanding of how TCP works. But the most important thing is the ability to read standards (RFCs).


Project: R.I.P.Link - a tool for finding dead links on the web[0].

The inspiration for this project came from Wikipedia and the Internet Archive partnering to fix broken links on Wikipedia[1]. After briefly searching around I couldn't find any great tools for this, and I decided it would be an interesting side-project. I was also looking for a good medium-size project to improve my Go skills and understanding of concurrent programming.

From here I'd like to implement recursive searching functionality and depth limiting. I think this would greatly improve the appeal of the tool.

[0] https://github.com/mschwager/riplink
[1] https://blog.wikimedia.org/2016/10/26/internet-archive-broke...
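The core loop of such a tool - pull links out of HTML, then check each one's status - can be sketched briefly. The real riplink is written in Go; this Python version just illustrates the idea, not its actual implementation:

```python
# Sketch: extract anchor hrefs from HTML and flag dead links.
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def is_dead(url: str, timeout=10) -> bool:
    """HEAD-request a URL; treat errors and 4xx/5xx as dead."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status >= 400
    except (HTTPError, URLError, TimeoutError):
        return True
```

A real crawler would also need concurrency, retries, and politeness (robots.txt, rate limiting), which is exactly where Go's concurrency primitives shine.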


I made a link tester for all the links on my webpage, after I did not maintain it for a while and half the links were dead: https://github.com/benibela/site/blob/master/manage.sh#L33


Like many people, I use Evernote as a personal Knowledgebase. I started this because I realized a lot of my bookmarks were rotting. But of course, links in my evernote notebooks face the same problem (as much as I try to capture the content).

Would it be possible to extend your tool to search Evernote, bookmarks, Confluence, and other link lists for rotten links?


It's definitely possible! I'd just have to teach RIPLink to ingest data from different sources. I.e. have it parse a text file (or other type of file) instead of HTML. Would you mind opening an issue on Github?


It would be neat if you could extract the desired data (text) from the page and hash it so that you could later fix the link (by pointing to a new valid link).


aim it at SEOs if you want to make money, broken link building is a pretty big strategy in the SEO world


Yup. An old but probably still used grey-hat strategy was to identify dead/forgotten websites with lots of 'seo juice' (had high rankings, google associates them with keywords strongly, etc). Then you buy that domain and host your new site on it, taking advantage of its history.


Interesting! I like the idea!


We're building a new type of wind turbine that generates energy using significantly less material, making it cheaper to install. We do this using huge fixed-wing kites made of carbon fibre. Read more and follow our progress at http://kitex.tech


This is awesome! Are you hiring/accepting volunteers/do you need help? It's a really interesting project.


Thanks! We like it too... We're not hiring at the moment, but if you drop a line to "kugel at [companyname] dot tech" maybe we can find some common ground! (Except that I've also briefly been a fruit picker in Japan, great project btw!)


Currently a student in college and I'm working on https://www.60secondseveryday.com, the fastest way of keeping track of your memories.

You get a phone call every night and record your 60-second response to the question "How did your day go?". From there, your response is archived into your private online journal and displayed alongside your photos, twitter posts, check-ins, etc from that day.

Soon, you'll start getting Flashback emails (ex. "Here's what you were doing 6 months ago") with all of those cool things so you can reflect on your past.


That's cool I've been kind of thinking building something like this. Are you using Twilio? How are the transcriptions?


Thinking of removing them since honestly, they aren't very good. Yep using Twilio. Would love to chat more if you're interested.


Have you looked at IBM's Speech to Text? - https://www.ibm.com/watson/developercloud/speech-to-text.htm...

Not sure if there is a difference but might be worth trying a few different services.


Project: Hosted & On-Prem fast full-text search with faceting, filtering, multiple ranking algorithms and plenty of other features.

Not yet ready for launch, but I built a simple demo while trying to get into Startup School (failed, unfortunately :(). It lets you search every Hacker News post while filtering by domain / user / story type.

http://searchhn.com


Thank you all.. I am finally getting some search requests. I applied for Startup School and was desperately looking at the logs every day for someone to try it out :)

With 50K-plus amazing companies applying, it is very difficult to stand out :( I'm slowly building a team with some of the best engineers I've had the pleasure of working with to take this to the next level. We badly wanted to get into Startup School to help guide us there. Wish we were part of the program, but glad everybody gets to view the lectures :)


Cool project. Could you briefly talk about

* the backend you use and how it will scale to sites with large amounts of data across servers

* can third party sites integrate your search service?

* How is it different from eg- Algolia

Good luck with the project!


Thank you !

The backend is custom built, written in C and assembly. It supports sharding and replication which is rack-aware and data-center-aware.

> can third party sites integrate your search service?

Yes of course.. that is the end goal.

Algolia is awesome.. but you end up paying a lot based on how many ways you sort/rank data. This operates with an on-the-fly ranking model and can rank on any field in either direction. There are also different ranking algorithms, extensibility with Lua, and a lot more to come when I officially do a Show HN.


Have you tried any open-source ones in C/Rust? Are you doing anything differently/better (what/how)? What are you using for replication/sharding? Is there a possibility to split shards? What are you using for the server backend framework (e.g. Seastar)? Any libraries etc. that you're using (I'm interested)?

You have to write a really long blog post on why you've chosen this way.


Yes.. this will take a very long blog post. This started many months ago as a project to learn 'golang' and as a way to index my ever-growing collection of music / movies / documents / subtitles / lyrics and everything on my servers.

Got hooked into it and became obsessed with speed and rewrote everything in 'C'. Replication is based on 'Raft', actually the multi raft variant proposed by the amazing folks at Cockroach (https://www.cockroachlabs.com/blog/scaling-raft/)

It does not use a backend framework. It is a simple http/https server (epoll + multi-threaded) which talks json. I use Jansson for json and utf8proc for unicode handling. Index is custom built.

I have been working on low powered distributed systems for over 10yrs, which certainly helped. Will definitely let you know when I get that blog post written :)


This is amazing. Are you also going to implement Soundex?


Thanks.. yes, that is something I'm planning. I'm keeping the engine super flexible. It currently does TF-IDF, Okapi BM25, or an Algolia-like tie-breaking algorithm.
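For readers unfamiliar with one of the ranking functions named here, a compact Okapi BM25 scorer looks like this. This is the textbook formula in illustrative Python, not searchhn's C implementation:

```python
# Okapi BM25: score documents against a query, rewarding term
# frequency with saturation (k1) and penalizing long docs (b).
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """query: list of tokens; docs: list of token lists."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                     # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
            score += idf * norm
        scores.append(score)
    return scores

docs = [["fast", "search", "engine"], ["slow", "database"], ["search", "index"]]
print(bm25_scores(["search"], docs))  # only docs containing "search" score > 0
```

Note how the shorter matching document outranks the longer one - that's the length normalization at work.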


really cool! is there a way to search by domain?


Thank you.. yeah, the current UI sucks, I'm sorry :( Just search for a post that you know is in your domain and choose the domain checkbox on the right. You can then change the search text or any other filters after that.

Will probably fix the UI this weekend and do a proper 'Show HN' with more options, charts and analytics.


In light of today's Apple Mac Pro announcements, I'll chime in...

Problem: Building macOS desktop applications as an indie-developer.

Project: Fileloupe for Mac and Videoloupe for Mac

Fileloupe is a lightweight media browser that I actually announced on Hacker News a few years back. Videoloupe was just released and is a video player/editor in the spirit of the older QuickTime Pro 7. I work on both of these full-time and I'm currently in a coffee shop in Bangkok. The jury is still out on whether being a macOS indie developer is sustainable...

https://www.fileloupe.com https://www.videoloupe.com


I have to say the file size for File Loupe is impressive - only 5MB for all that functionality.

PS Is it written in Swift or ObjC?


Both are 100% Objective-C. No plans to make the switch anytime soon. I'm comfortable with Objective-C and understand it reasonably well.

Applications can be pretty small when they're 100% native and include almost no artwork or auxiliary assets.



Sorry to go off topic but are you living, holidaying or "digital-nomad"ing in bkk?


uh wait... what Apple Mac Pro announcements?


Discussion on Hacker News:

https://news.ycombinator.com/item?id=14031619

Daring Fireball "The Mac Pro Lives":

http://daringfireball.net/2017/04/the_mac_pro_lives


So, a slight bump to the existing machines and a promise for a fully revamped one next year.

I have the most recentish Mac Pro. It's a fantastic machine, I wish it had sold better, but perhaps the proliferation of iDevices and laptops more or less killed it except for special niches :/


wow this is great! I've been searching for a tool like videoloupe for a long long time!!


Glad to hear. Feel free to follow up by email with any feedback, comments or questions.


Grep for the internet.

What I often want is not a search engine, not a recommender, but a filter. Something that would allow me to look at the distributions of content on the Web rather than trying to answer my questions. I badly wanted to pay someone a few quid for a service like this, but had to build it myself.

Feel free to piggyback on the next batch job; use fBd7guQLDLx6RIm00GE7uH5h0Lk1CKKl as access key.

https://alpha.crawlfilter.com/


Cool.

Suggested secondary source: https://archive.org/details/alexacrawls?&sort=-publicdate&pa... (spotty; sometimes the crawls are dark and can't be read)

Also: when you get lucky with ACD: https://redd.it/5s7q04 (I've heard other users getting hard-capped at 100TB though)


Project: An open source home automation solution. Currently, I have code for a thermostat (https://github.com/alittlebrighter/thermostat), garage doors (https://github.com/alittlebrighter/rpi-garage-doors, android client: https://gitlab.com/igor-automation/garage-door-remote-androi...), and a bare bones webcam (code inside of the garage doors repo). Planned features are a unified client for each service and then remote control via encrypted configurations stored in Firebase.


My main current project has to do with my home media system.

Any sufficiently complex media system needs either more than one IR or RF remote, or it needs a universal remote.

The best universal remotes are activity based, and maintain information about the state of the overall system.

However, it's been my finding that many of them (looking at you, Logitech) focus on the software at the expense of high-quality hardware that's pleasant to use.

So, what's a person to do? Implement customizable command logic for a variety of command output formats (serial, IR, HDMI-CEC), state management (power on, power off, muted, projector screen open, lights on or off, etc.), and logic (the power button does this if we're in state 1; otherwise do that and shift to state 3) in a webapp interface that runs on a Raspberry Pi and can learn your favorite dumb remote's buttons via LIRC.

I'd love for anyone with an interest or a possible use case to reach out; right now, I'm just writing for myself.

Code: https://github.com/haikuginger/riker
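The activity/state idea described above boils down to a table-driven state machine where a button's meaning depends on the current state. A toy sketch - states and commands here are invented for illustration, not riker's actual model:

```python
# Toy table-driven remote: (state, button) -> (commands to emit, next state).
TRANSITIONS = {
    ("off",      "power"): (["tv_on", "receiver_on"],   "watching"),
    ("watching", "power"): (["tv_off", "receiver_off"], "off"),
    ("watching", "mute"):  (["receiver_mute"],          "muted"),
    ("muted",    "mute"):  (["receiver_unmute"],        "watching"),
}

def press(state, button):
    """Return the commands to send and the new system state."""
    commands, next_state = TRANSITIONS.get((state, button), ([], state))
    return commands, next_state

cmds, state = press("off", "power")
print(cmds, state)  # ['tv_on', 'receiver_on'] watching
```

Unknown (state, button) pairs are simply ignored, which matches the "maintain information about the state of the overall system" behavior of activity-based remotes.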


Everyone is writing about their startups. My startup makes self-driving delivery robots (http://robby.io).

But that aside, one of the things I'm working on is trying to use RNNs to create a better digital piano. Even the best digital pianos out there are far inferior to even a YouTube recording of a concert grand piano. One of the biggest problems I notice is the complete decoupling of resonances between strings; most digital pianos treat notes independently and just sum up the audio signals. In reality it's a giant, heavily interconnected physical system with tons of resonances and nonlinearities, and I want to see if some signal processing combined with backpropagation can be used to abuse a neural network to simulate the energy transfer in a physical system of that complexity.

I haven't been terribly successful yet, but it would be amazing if there existed an open source digital piano that performed spectacularly and could be plugged into an el cheapo weighted keyboard for decent piano sound.
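The coupling effect described here can be demonstrated with a toy model: two damped oscillators ("strings", unit mass) joined by a weak spring (the bridge). Pluck one and energy leaks into the other - exactly what independent per-note samples can't reproduce. All parameters below are arbitrary illustration values:

```python
# Two weakly coupled, slightly detuned, damped oscillators,
# integrated with semi-implicit Euler. String 1 is "plucked";
# we track how much amplitude string 2 picks up by sympathy.
def simulate(steps=4000, dt=1e-4, k1=4.0e5, k2=4.1e5, kc=2.0e4, damp=2.0):
    x1, v1 = 1.0, 0.0   # string 1 starts displaced (plucked)
    x2, v2 = 0.0, 0.0   # string 2 starts at rest
    peak2 = 0.0
    for _ in range(steps):
        a1 = -k1 * x1 - kc * (x1 - x2) - damp * v1
        a2 = -k2 * x2 - kc * (x2 - x1) - damp * v2
        v1 += a1 * dt; x1 += v1 * dt
        v2 += a2 * dt; x2 += v2 * dt
        peak2 = max(peak2, abs(x2))
    return peak2

print(simulate() > 0.01)  # True: the undriven string picks up real energy
```

Scaling this up to 200+ strings with nonlinear soundboard coupling is where learned models could plausibly beat explicit simulation.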


Do you know pianoteq?

https://www.pianoteq.com/


Thanks! This is interesting.


Pianoteq is without a doubt the best sounding piano synth on the market today. Uses all sorts of interesting physical modeling algorithms (including string-to-string resonances, with a more-than-first order model...), unfortunately, unpublished. ;)

If you're interested in chatting about sound generating software and algorithms, feel free to shoot me a line. "Also, do I remember you from ec-discuss?"


Yep I used to live at EC :)


Are you sure that you are comparing with the best digital pianos? I mean they have been working on more life-like digital pianos for decades, going back at least to the Kurzweil stuff https://en.wikipedia.org/wiki/Kurzweil_Music_Systems


Surely you mean "What should you be working on?"

Too many projects, nothing is ever finished :(

I envy the people who a) have some creative ideas and b) manage to ship them. More than once an offhand remark of "wouldn't it be nice if we had X" made me write the damn thing, in a much better way than I could have ever described a project of mine. Maybe that's also the reason I'll probably never start a company again (and if I did, it would be consulting again, not a product).

To not completely derail the topic, I recently launched a small microblog at http://f5n.org/nano/ - mostly to test a web framework and also to not clutter my blog with small blurbs. As you can see I immediately stopped using it after the launch :)


Funnily enough, the service I'm building is to help people with exactly that problem of nothing getting finished.

I believe everyone, you included, has a great potential of skill and creativity waiting to be unleashed with the right words at the right time. I think a lot of people could use just a little help, and it can unblock them in huge, life-changing ways.


currently making low-cost and low-powered tree cameras that will hang from Atlanta-area fruit trees and send us once-a-week tree photos.

The idea is that we can hang them in trees all over the metro area and keep an eye on when they ripen. This is mostly powered by the Twilio programmable data service and the Ai-Thinker A20 (https://www.aliexpress.com/item/DIYmall-ESP8266-A20-Wifi-GPR...) -- WiFi and 2G cell radio for $14!

We can likely get battery life in the range of several months, so we can put the sensors up at the beginning of the season and then take them down when we harvest.
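A rough duty-cycle calculation shows how "several months" is plausible. Every number below is an assumption for illustration, not a measurement from the actual build:

```python
# Battery-life estimate for a duty-cycled camera node: sleep almost
# all week, wake briefly to shoot and upload one photo.
sleep_ma = 0.5          # assumed sleep + regulator quiescent draw, mA
active_ma = 200.0       # assumed camera + 2G radio burst draw, mA
active_s_per_week = 120.0
capacity_mah = 3000.0   # assumed, e.g. a couple of 18650 cells

hours_per_week = 7 * 24
active_h = active_s_per_week / 3600
avg_ma = (sleep_ma * (hours_per_week - active_h)
          + active_ma * active_h) / hours_per_week
weeks = capacity_mah / (avg_ma * hours_per_week)
print(round(weeks))  # ~33 weeks under these assumptions
```

The takeaway: with uploads this infrequent, sleep current dominates, so the regulator's quiescent draw matters far more than the radio burst.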


I was thinking of doing something similar actually! That looks pretty cool.

Do you know roughly how big the images that the camera takes are slash do you happen to have any example images?

I have a few more questions that I can't think of at the moment, but I'd be happy to give you a hand with any battery-related stuff (a specialty of mine) if you'd like! My email's in my profile, feel free to reach out even if you don't have any battery questions yet.


Haven't received this camera yet, but it advertises as 0.3MP, which I assume is 640x480 (640x480 = 307,200). We might have to play some with camera placement too in order to actually get relevant images.

I'll drop you a line!


What kind of a cell plan (and pricing) do you use for this?


Currently we're using Twilio Programmable Data. $2/month/sim and then $1 / megabyte. I think we can get by with 640x480 photos, but that's what we'll be experimenting with this season.
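To put the quoted pricing in perspective, a rough per-camera cost estimate (the JPEG size is my assumption, not a measured figure):

```python
# Rough monthly data cost per camera at the quoted Twilio pricing
# ($2/month per SIM + $1/MB). The JPEG size is an assumption.
PHOTO_KB = 50.0             # ballpark for a compressed 640x480 JPEG
PHOTOS_PER_MONTH = 52 / 12  # one photo per week

mb_per_month = PHOTO_KB * PHOTOS_PER_MONTH / 1024
cost_per_month = 2.00 + 1.00 * mb_per_month
print(f"~{mb_per_month:.2f} MB -> ~${cost_per_month:.2f}/month per camera")
```

At that resolution the data charge is almost negligible next to the $2 SIM fee, so the per-camera cost stays in the low single dollars per month.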


Problem: We believe document versioning is broken for most people: writers, attorneys, academics, journalists, etc. We engineers have GitHub, but others don't. So, we decided to solve this problem and create Tuiqo.

Project: Document versioning for humans

We built a prototype: http://tuiqo.com (video demo: https://www.youtube.com/watch?v=vBj8ezqLCOs). We've been accepted to the YC Startup School founders track.


Quick note - your homepage says 'try without registering', but clicking on the editor button prompts for a login.


Hey, thanks for replying! This definitely shouldn’t be happening, would you mind sharing which browser/OS you’re using? Do you maybe have cookies/javascript disabled? You can send a quick mail to dzeno at tuiqo.com if you want.


Looks like you fixed it. :)


I'm working on a programming language. Just as a hobby, not commercially. :-)

Design-wise it's really just how my ideal language would work: functional, compiled, statically typed (higher-ranked, impredicative, but type-level functions are first-order, and no effects system), mutable refs that are _not_ GCed, first-class delimited continuations, equi-recursive _and_ iso-recursive types, implemented in C so the compiler itself has a good performance baseline, etc...

It has been really fun so far and an amazing learning experience. I intentionally didn't read up on type theory, so implementing the type checker was a huge challenge.

Amazing feeling when I finally got it to (correctly) work, though :-).


What do you mean you didn't read up on type theory? I'm guessing you've studied it at a university?


No, I really didn't know much about type theory. (And I don't think I can study that at my university actually).

I had an intuition of how parametric polymorphism worked because of my experience with Haskell, but I didn't know how a type inference algorithm works, how that integrates with checking against an assumed type (and the whole unification shebang that comes with it), how universal quantification plays into that, or the amount of pitfalls that come with certain design decisions (impredicativity, etc.).

I personally gain the most insight into something when I'm trying to think about it from first principles, so I avoided extensive research about type theory beforehand so that I wasn't "tainted" by preconceived ideas :-).

Of course I got stuck quite a few times. I would then look at how Haskell does something and try to understand the why and how, etc.


Project: I am working on improving the 2 factor authentication (2FA) user experience for end users.

Problem: 2FA is an easy way to drastically improve one's security posture with many sites (e.g. AWS, Github, Google, Stripe, etc.), but it is still an incredibly annoying user experience that gets worse the more sites you use it with.

- When I pick up my phone to enter a 2FA code, I often get distracted by an email, text, or other notification. I'll put my phone down a minute later and think "what was I doing? Oh right, I need that 2FA code".

- It is also annoying to visually identify the correct site/account combo in my list of 2FA codes because I use many online services and may have multiple accounts at each one (e.g. AWS).

- Though some apps have a better UI presentation of 2FA codes, the classic Google Authenticator app shows all of the codes in a single list and I would often put in the incorrect code from a row above/below what I intended because it was difficult to visually keep track of the correct row as I transcribe the 2FA code into my desktop browser.

- It is annoying when the 2FA code changes while I am entering it in my desktop browser. Often, sites will accept the previous 2FA code as well, but if I only entered the first 3 digits and don't recall the last 3 digits, then I have to start over entering the new 6 digit 2FA code.

I am working on a new user experience which replaces these pitfalls and annoyances with the ability to simply click a button on your phone as your second factor of authentication. This workflow is compatible with any site that currently implements 2FA (e.g. AWS, Github, Stripe, etc, etc) and provides the same level of security as using another 2FA app such as Google Authenticator, Authy, etc.
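For context on what a workflow like this would be generating and relaying: the codes these sites accept are standard TOTP (RFC 6238), the same algorithm behind Google Authenticator. A minimal sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP -- the algorithm behind Google Authenticator codes."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if when is None else when) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", base32-encoded
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, when=59))  # → "287082"
```

Since the code is a pure function of the shared secret and the clock, a phone app can compute it and push it to a browser extension without the site needing to change anything, which is exactly what makes the "works with any 2FA site" claim plausible.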

It would be really encouraging/useful if you could leave a comment explaining why you might find this new 2FA UX useful or not! Thanks.


Just curious, how are you planning on approaching this problem in a way that apps like Authy aren't doing?


As johnmaguire2013 guessed, we will have a browser extension which will request a 2FA code from the mobile app. The mobile app will receive a push notification and ask the user whether they would like to allow or deny the request for a second factor of authentication. The user only needs to click one button on their phone, and the 2FA code is securely sent to the browser, where everything else related to submitting the 2FA code can be automated.

The browser extension can integrate with any site that currently supports 2FA without any integration or changes required on the part of the sites.

Let me know if you have any more questions! Do you think you'd be willing to change your 2FA workflow to the one described above? If not, what are some of your concerns, thoughts, etc.? Any and all feedback is appreciated!


It sounds like he's going to support OTP 2FA, likely through a browser extension? That's my guess anyway.


Yup, you nailed it. That is exactly the plan. Any thoughts on that approach? Do you think you might be willing to update your current 2FA workflow to the one described above?


I think it's a very cool idea! The other big UX issue with 2FA (in my opinion) is backup & restore -- nail both and you'll have a pretty solid product.

For disclosure, I work for Duo, so I'm a big believer in push-based 2FA. (Consider applying if you're interested in usable security!)


Ah! Duo is definitely one of the incumbents in the space that we looked at during our competitive analysis. As far as I understand it, your push based 2FA solution only works for sites which use Duo as the 2FA provider. Is that correct?

I am hoping to build a solution which has a similar sounding UX to Duo Push, but works for any site that currently implements 2FA without requiring the site to make any changes at all. I think that this will provide more comprehensive coverage of sites that developers and other users interact with on a regular basis. For example, Github will not update their backend to use a 2FA service that I write because they already have a good solution in place, but by using a browser extension I can build the UX that I want without any changes required on Github's end.

Admittedly, I had some trouble getting started with actually trying out Duo to get a feel for the UX, but I will definitely have to check out the features that you provide to see what competitors in the space are already doing.

I agree that Backup & Restore is another prime part of the 2FA UX that needs some TLC. We've got some thoughts on improving that as well, but the first step is to nail the UX of actually being productive with 2FA and then come back to add enhancements.

Here is to some healthy competition! :)


Yep, we have integrations for many services, but software must integrate or support SAML (as Github Business/Enterprise does) for us to do 2FA. Our core product isn't really 2FA however, and we have different target markets: Duo primarily targets businesses looking to protect the services their employees access, while it sounds like you're trying to provide better UX for any consumers of 2FA.

I completely understand your approach and think it's a really neat idea. Looking forward to seeing it. :) Feel free to connect with me via email, I'd love to beta your product.


Thanks for the background on Duo.

I'll definitely reach out once we have a beta to demo. We'd love to get some feedback from folks outside our immediate team!


I'm working on an Amazon Alexa app that can send your phone arbitrary push notifications.

Ever since I bought a dot, I've been frustrated with the relatively high friction of sending data from the dot to my phone (why do I have to open the Alexa app to see the full weather forecast? why can't Alexa send me a link to a full Wikipedia article? why can't Alexa start composing a text for me? etc). The solution is to build an app which can route you from an Alexa request (e.g. "Send me the Wikipedia article on X"), to your phone, to the appropriate app.

There are relatively few use cases that I've found so far, but I think the Alexa -> Push interface is a cool one to explore, and it's been really cool to work with the platform and finally get to the point where my app receives a notification from the Alexa cloud. Open to suggestions for this if anyone has any as well!



We're building an SMS chatbot for the Canadian cannabis market. It allows you to purchase marijuana directly from dispensaries and licensed producers in a few messages. We’re adding NLP and a basic recommendation engine to help increase retention and drive purchases.

We built out the infrastructure in a hackathon and are grinding away at an MVP. Starting to demo for dispensaries soon. I think we’re really on to something! People have been texting their drug dealer since.. well, the advent of cell phones, so we aren’t changing behaviour. It’s just nicer to chat with a friendly bot (with dispensary to doorstep delivery) than exchanging goods from a blacked-out Malibu in a Walmart parking lot.

Landing page here: http://hicanna.io/ Super happy to discuss our stack or anything else if you’re interested in the project.


Hey, not sure if this is helpful but I have a side project that might save you some work: https://www.smsinbox.net


Very cool - I'll look into it


I think you're on to something. Not much time left before July 1 2018, I hope you can build a good team of partners/PR. Good luck!


Project: A website containing programming projects accompanied by explanations, unit tests, etc. (a la tutorials) to help beginners get off the ground quickly.

Audience: It is aimed at learners who already know the syntax of a language, but are unsure/unable to start a project of their own.

More info: I have written more about it on Reddit: https://www.reddit.com/r/learnprogramming/comments/62r1wr/i_...


This looks like a nice fill-in between "How I start":

http://howistart.org/

and the aosa-book "500-lines or less":

http://aosabook.org/en/500L/introduction.html

(and maybe with a hint of rosettacode in the mix: http://rosettacode.org/wiki/Rosetta_Code )

I'd love to see some collaborative projects like this - idiomatic/recommended setups of editing/debug/release, as well as approach to coding.

I notice that Python is still absent from the "How I start"-series - something like https://github.com/pypa/sampleproject might form part of a starting point (also the Flask "flaskr" tutorial sets up a bare-bones python package, but needs minor adjustments for windows, as I've noticed using it for a small course on web programming - I'll have to find the time to file an issue and patch).

The great thing about such projects being open to contribution, is that aside from the bike-shedding, one of the best ways to get a correct answer to a problem quickly, is to post the wrong answer on the Internet.


I'm building a 21st century farm.

I'm working towards building an ag farm that nets $100,000/yr with only 8hrs/week of actual work. No clue how I'm going to get there but that's the fun stuff. Right?


Possible issues: capital, interest, land acquisition, zoning, weather, water, climatic region, crop seed availability, need for supplementary transportation (seeds, equipment, fertilizer, etc.), unpredictable transport overheads, etc.

The fact that you are sharing hard figures before a method really shows that you haven't advanced your thinking very far. Plants don't need technology to grow, they grow anyway. I'm not a farmer either but I can tell you the expensive part of farming is not the growing, it's the capital acquisition, land acquisition, transportation overheads, preparation, crop selection, harvesting, getting to market, and venture risk mitigation.

The most expensive part is often harvesting. The traditional US solution is "industrial scale farming and industrial scale harvesting equipment". The problem is that monocultures really suck in terms of biological efficiency (you start to need pesticides, fertilizers, machines and fuel for their deployment, crop rotation, wind breaks, etc). Better is inter-cropping, where you have different plants together in the same field. A smarter, robotic harvesting system to effectively harvest these naturalesque "mixed" fields could really be a game changer. Then again you could just employ Mexican immigrants like everyone else...

Another option would be an automatic guerilla farming drone to avoid paying for land. You see this done manually a lot in China. Something that can identify an area, clear it, plant it, potentially monitor or water it, and harvest it semi-autonomously. You would probably need a very high value crop to make this work, security would be a problem, and only certain types of unused land would function (legally, proximity-wise, security and visibility wise).


Thanks for all the advice. If I wasn't on mobile I'd try to respond to the key/valid points. A lot of these are very real issues, and it's going to be interesting discovering what the limitations and bottlenecks are for a lot of these activities. Is there a viable, small-scale solution?

You mentioned harvesting, and this is what I'm actually looking forward to most. Immigrant labor is getting more and more expensive and risky. Entry-level farm hands will always have a place, but we're going to see a lot of growth in automation equipment. I'm planning for it in my designs, and once the costs reach near parity it'll basically be my solution and catalyst for scaling.


Interesting. If it can be implemented in an apartment, I am game :)


Fortunately that's one of my starting grounds. I have a 8' x 8' x 8' (tall) sun room in my apartment that I'm trying to plan for. This first year I'm only shooting for $10,000 with a 6 month grow season. I had plans for sod/turf but my buyer bailed out on me.


Yes, I'd be interested as long as it's legal. Illegal urban farming is known to generate a lot more than $100K/yr.


How do you get started with something like that? Do you have a blog or anything?


Lately I've been following Curtis Stone[0] on YouTube[1] and he's offering quite a bit of advice.

There's quite a few people out there doing this stuff. Google "urban farming" or "urban horticulture" and see how much that stokes your curiosity.

0: http://theurbanfarmer.co/ 1: https://m.youtube.com/user/urbanfarmercstone


@mlejva https://research.googleblog.com/2016/09/announcing-youtube-8... - half a million hours of video, with item bounding box metadata to boot. Or if "simple" is the hard part, just YouTube and your favorite keyword - "animation", "minimalist", "simple", etc. The advantage of your problem space is that any video will provide a TON of input data, since you get 1500-3600 frames for every minute of video.
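For the single-moving-pixel case specifically, you may not need a dataset at all; toy videos like that can be generated on the fly. A sketch (the frame count, resolution, and bounce logic are arbitrary choices on my part):

```python
import numpy as np

def moving_pixel_video(frames=16, size=32, seed=0):
    """Generate a toy video: one white pixel moving in a straight line and
    bouncing off the edges -- the kind of minimal dataset described above
    for frame-interpolation experiments."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, size, 2).astype(float)
    vel = rng.choice([-1.0, 1.0], 2)
    video = np.zeros((frames, size, size), dtype=np.float32)
    for t in range(frames):
        video[t, int(pos[0]), int(pos[1])] = 1.0
        pos += vel
        for axis in range(2):                 # reflect off the borders
            if not 0 <= pos[axis] < size:
                vel[axis] = -vel[axis]
                pos[axis] += 2 * vel[axis]
    return video

vid = moving_pixel_video()
print(vid.shape)  # (16, 32, 32), exactly one lit pixel per frame
```

Varying the seed gives unlimited training examples with known ground truth for the held-out middle frames, which sidesteps the "hard to find simple video datasets" problem entirely.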

I'm working on two different side-projects, and have been for far too long - working may be a bit of a stretch. The first is a CRM for job searchers, turning a must-have from the business world around. The second one is a password/secret manager with audit as a first-class citizen, focused on enterprise sharing and reporting.

I have a "fun" one too, to build a Google Moderator clone as a hosted service. I'm doing that as my intro to Serverless, and will probably do a write-up of how to build and launch on AWS with little/no fixed-cost.


I'm building a stock option calculator for startup employees. It walks you through how to collect the information you need to know about the stock options you own, or have been offered, and figure out how much they would be worth in various scenarios.

http://www.optionvalue.io


I would have found this incredibly useful when I was doing research to understand stock options that I have had in the past. Sounds like a really cool project. Keep at it!

Also, to get a more accurate estimation, you will have to know the amount of investment in each round and the permissions that come with different classes of shares. For example, liquidation preferences and investors deciding whether they will convert to common shares or not, etc.
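To illustrate why liquidation preferences change the math so much, here is a deliberately simplified model of a 1x non-participating preference; the structure and all numbers are hypothetical, and real cap tables add participation, caps, multiple series, and option pool effects on top:

```python
def common_share_value(exit_value, pref_invested, common_shares, pref_shares):
    """Per-common-share value at exit under a 1x non-participating preference:
    preferred holders take the greater of their money back or what they'd get
    by converting to common. A simplified model, for illustration only."""
    total = common_shares + pref_shares
    as_converted = exit_value / total          # value per share if preferred converts
    if pref_shares * as_converted >= pref_invested:
        return as_converted                    # preferred converts to common
    # preferred takes its 1x preference off the top; common splits the rest
    return max(0.0, exit_value - pref_invested) / common_shares

# Hypothetical: $20M exit, $15M invested, 6M common / 4M preferred shares
print(f"${common_share_value(20e6, 15e6, 6e6, 4e6):.2f} per common share")
```

In this made-up example common gets about $0.83 per share rather than the naive $2.00, which is exactly the kind of gap a calculator should surface.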


Thanks! I'm definitely planning on adding liquidation preferences -- a lot of people have been requesting it.


Problem: It's hard to configure Vim/Neovim to work like an IDE, and the terminal UI isn't the same as something like Atom, Sublime, or VSCode. Vim plugins for those editors never quite hit the sweet spot for me in terms of using the muscle memory I've built up.

Project: Oni (https://github.com/extr0py/oni), a Neovim front-end with out-of-the-box IDE functionality (right now, supports JavaScript & TypeScript).

Cool to see what everyone is building!


I'm working on https://programmercv.com, a résumé builder with superpowers:

- Import from LinkedIn, Xing, or StackOverflow

- Multiple résumé versions

- Track your job applications

- Export to any format you need (doc, pdf, html, xml, json)

- Publish on GitHub Pages

- No lock-in: built on open source tools

(most features are still work in progress)


Have you investigated linkedin profile import feasibility yet? Their API is really restrictive and they're blocking most of the hosting provider IP ranges.

Ridiculous that they restrict access to user profiles even if with the explicit user permission (via OAuth).

Let me know if you find a solution :)


Yes, their API is useless, imo. At least at getting stuff out. They want their users locked in.

I'm building a tool that parses .html files (your public profile) and extracts the relevant information. Built on Nokogiri, and open source. More info here: https://programmercv.com/resume-exporter


Nice, cashing in on the fact that your audience is technical :)


Upvoted for Mr.Robot's sample resume :D


Yesterday, I published a library for automatically generating 3D models. Input is a simple list of unordered, undirected vertices and edges (soup). Output is a triangulated mesh (as vertex, normal, and index arrays) with consistent winding order and per-edge weighted normals. Once it's done the graph analysis, you can give it any set of vertices with the same topology, and it will instantly give you back adjusted normals. Or you can pass a deforming function or lambda, and it will apply it to the vertices and give you that mesh instead.

https://github.com/andy-wood/AutoMesh
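For a flavor of how weighted vertex normals are typically accumulated, here's a common area-weighted variant; this is a generic sketch, not necessarily AutoMesh's per-edge weighting scheme:

```python
import numpy as np

def vertex_normals(vertices, triangles):
    """Area-weighted per-vertex normals for an indexed triangle mesh.
    The cross product of two face edges has length proportional to the
    face's area, so summing raw cross products weights each face's
    contribution by its area automatically."""
    vertices = np.asarray(vertices, dtype=np.float64)
    triangles = np.asarray(triangles, dtype=np.int64)
    normals = np.zeros_like(vertices)
    a, b, c = (vertices[triangles[:, i]] for i in range(3))
    face_n = np.cross(b - a, c - a)          # length ∝ 2 * face area
    for i in range(3):                        # accumulate onto each corner vertex
        np.add.at(normals, triangles[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.where(lengths > 0, lengths, 1)

# A unit quad in the z=0 plane, counter-clockwise winding:
quad = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
tris = [[0, 1, 2], [0, 2, 3]]
print(vertex_normals(quad, tris))            # every normal is (0, 0, 1)
```

Consistent winding order (which the library derives in its graph-analysis pass) is what makes all the cross products point to the same side here; with mixed winding, the contributions would cancel.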


I'm working on a Slack-based RPG called Chat & Slash: http://chatandslash.com/

It's been interesting, trying to shoehorn popular RPG tropes and expectations into Slack's interface. It's turned into something akin to a MUD, although the multiple-user interaction isn't there. I guess it's more of a massively-single-user-dungeon?

Anyhow, I have about 50 testers in right now and about a week's worth of content. I'm just about to start working on the next major area, which should hopefully add about a month's worth of content. I'm always happy for more testers and more feedback!


Software defined radios that can work in extreme conditions (like during or just after a giant disaster), an IoT kernel that does useful work in a mesh network without a backhaul, a personal 'internet radio' that can stream your home-market AM/FM stations to you anywhere, and a computer system to teach the mid-levels of computer science between programming and database design.


Where can I read more about your SDR project? Is it a transceiver or just for monitoring communications? I'm a ham radio operator (KK6BXK) and I'm starting a website (http://www.survivalscout.com) that plans to cover this type of stuff. I try to link out to neat projects like this.


Could you explain more about the computer system to teach computer science ?

Would it be like an IPython notebook?


In my opinion, there is a gap between what you can learn in an Arduino-type environment and a Raspberry Pi-type environment. When I started using Cortex-M level processors (32-bit, with a very simple MMU (protection only)), I realized they could be the basis for a "PC/AT"-type system where you could explore everything from a simple monitor-based operating environment (think MS-DOS or AmigaDOS) through to a self-hosted set of compilation tools. The hardware is pretty easy (there are lots of different ideas in this space) but the curriculum is currently still a bit weak. So with the addition of a simple FPGA-based frame buffer, I've been building a system that can be used to teach a student computer systems.


Made this web app to help you figure out how your favorite language is doing in the job market / what to learn next.

Every month I scan the previous month's Hacker News 'Who Is Hiring' thread and build these stats. Hope others find this useful. Constructive feedback welcome.

This is only the first version. Next I'm planning to add more data for previous months/years as well and show the evolution of individual languages over time.

http://langstats.azurewebsites.net/
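For anyone curious how such stats might be tallied, a sketch of the counting step. This is my guess at the approach, not the site's actual code, and the language list is illustrative; the interesting wrinkles are word boundaries (so "Go" doesn't match inside "Google") and names containing symbols like C++:

```python
import re
from collections import Counter

def language_counts(comments, languages=("Python", "JavaScript", "Go", "Rust", "C++")):
    """Count how many job posts mention each language at least once.
    Lookarounds instead of \\b handle names ending in non-word chars (C++)."""
    counts = Counter()
    for text in comments:
        for lang in languages:
            pattern = r"(?<!\w)" + re.escape(lang) + r"(?!\w)"
            if re.search(pattern, text, re.IGNORECASE):
                counts[lang] += 1
    return counts

posts = [
    "Python/Django shop, some Go services",
    "Senior Rust engineer wanted",
    "We use Google Cloud and Python",
]
print(language_counts(posts))  # Python: 2, Go: 1, Rust: 1 -- 'Google' doesn't count as Go
```

Counting posts rather than raw mentions avoids one verbose ad skewing a language's share, which seems like the right unit for "how is this language doing in the job market."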


I like it.

A couple of little things:

    You've got both "Objective-C" and "Objective C"

    It would be nice to separate out the APIs, like Cocoa and Qt, or make a different graph for APIs/Frameworks


Thank you, hadn't caught that. Will definitely add more graphs for frameworks and other things in future versions.


Damn, no FORTRAN.


Really neat website. Would be awesome to set a time frame or pick a month etc.


Thank you, will be adding that in next versions.


Neat app!
