All: apologies for the interruption, but don't miss that there are multiple pages in this thread, with over 2000 posts by now. You have to click through the More links at the bottom to see them all. Later pages have all kinds of stuff that is just as interesting. It's kind of incredible.
(We intend to get rid of pagination once the next implementation of Arc is ready.)
If you do make it monthly, my recommendation is posting it on the 15th of every month, to stagger it with the other regular threads. This way you don't make the start of the month a scramble of postings.
Mondays could also work, since it would allow people an extra weekend to finish what they want to show. Also it will allow them to procrastinate at work.
I'd say the Saturday after the paycheck closest to the 15th. People on bi-weekly pay schedules have funds again and it's counter cyclical to the 'Who's Hiring' threads.
Sunday might be good because then the question "what are you working on" is associated with what you might be working on over the weekend, i.e. things you're doing for curiosity, fun, or obsession, rather than a workaday job.
Replies to this top comment have been quite a job to juggle. My approach has been to reply and then detach them, so as to minimize distraction at the top of the thread. Unfortunately, that has led to the same questions being asked over and over, so I'm going to move all the replies underneath this stub, and then collapse it. The reason for a stub root comment rather than just collapsing all the replies is that a list of dozens of collapsed replies would take up most of the page.
I'm also going to partition them by topic, since there are so many.
Not necessarily. I seem to remember that HN used to have no pagination: all the comments were listed on one page (without any JS to load more comments, etc.), but the servers started to struggle as HN got more users and threads got bigger, so pagination was added.
Somehow I associate this with the thread when Steve Jobs died, but not sure how accurate that is. My memory is kind of sucky.
What is endless about it? You’d assume there will be a finite amount of comments when you load the thread page, and the new implementation can just render that.
As with many things in "modern" web development, "endless-scroll" is not really endless (as nothing is endless). It's a technique similar to how Facebook, Twitter and others continue to automatically load content when you reach the bottom, giving the impression of "endless" content. Of course, even Facebook and Twitter would eventually run out of content to show, but somehow that pattern got the name "endless scroll".
See also: isomorphic, real-time, serverless, and countless other examples of poorly named concepts in web development.
I know the feature they alluded to, thanks for the explanation. Comments on a thread are finite at any moment in time though, you wouldn't find "related content" as you were scrolling a specific thread page on HN. That was my point.
I was very surprised when I first found out FB would run out of content. I had assumed that it would just go further back in time, even if the feed was not strictly chronological.
Oh I certainly didn't mean HN would do infinite scrolling. That'd be gross, too complicated, and would contradict the old-web style of the site. But if we can just generate entire pages quickly enough, I don't see why we wouldn't go back to doing that.
As a quick fix, can you add a hyperlink to the next pages at the top of the page [“Page 2”, “Page 3”], rather than just a single “More” hyperlink at the bottom of the page?
People frequently think that their comment has been deleted because it doesn't show up on the first page. We introduced it as a performance workaround, so if performance recovers, I'd rather stop. If rendering the whole page causes other trouble, we can reconsider the problem from scratch.
Why would pagination be a good idea? From my perspective, there's no downside (HN is exclusively text and fetching+rendering the entire first page of comments for this submission takes the same amount of time on my computer as rendering a no-comment submission) and several upsides (no need to open several tabs to see all comments, no need to make several more round trips and wait several more seconds per page of comments, no "attention cliff" where comments on the n+1th page are significantly less noticeable than those at the bottom of the nth page, allow pagination to be handled by user agent/browser instead of being enforced by server).
The downside is having to serve larger pages which consist primarily of content which will not be read. This site is running on some resource-limited hardware as I understand it, so limiting the maximum potential size of each page served means more pages can be served more quickly, especially if you just cache the first page of a thread (which is all most people will engage with) rather than the entire thread.
You're confusing cause and effect - most people only interact with the first page of comments because they're on the first page, and they don't want to click through. If you disable pagination, then suddenly far more people will read those comments that would be on the second page.
Additionally, request count matters more than data transferred. It's much easier to serve 1MB to each of 100 users than 1KB to each of 100K users. n people are already going to view the comment thread for a submission - a several-dozen-KB increase in the amount of data that you send each of them (assuming you're serving them statically) results in anywhere from "little" to "imperceptible" additional CPU load.
Oh you guys. More an example of how using the perfect programming language for your project allows it to succeed, thrive, and eventually meet the challenges of growth :)
HN wouldn't exist without Arc, so it's pointless to argue about this. But I love talking about it so I'm going to anyway.
The feedback loops between the language, the HN software, and the living system of the community go very deep. I could write a lot about that. It's one of the most interesting things about the project, though unfortunately not visible. The software just works its Lispy magic behind the scenes, remaining small and malleable. It's still only 15k lines of code, including the language implementation, and that code does a lot.
On performance, it's pretty cool that Arc has managed to run HN through 12+ years of growth without much optimization. It's a good sign, not a bad one, that we're only doing major rework for performance reasons now. HN is far from Reddit-scale, but still: the application runs on a single core. (Though we do cache pages for logged-out users and serve those from an Nginx front end.)
As long as we're on the topic, consider this: the software for both HN and YC was just a single Arc program (and not a large one) for the first 9 years of their existence, during which they went from nothing to massively successful to industry-changing. Written by one person, programming part-time. That is a staggering achievement. The power of using the right language for your project goes far further than most people dream. Our imagination about this is crippled by path dependence, social proof, and the conditioning that comes from only ever doing things the same few ways, like those fish in experiments (which may be urban legends?) who stick to their corner of the aquarium even after a glass barrier has been removed. The solution space of software and programming is so much larger than most of us want to imagine that it is. Sad.
I'm not saying that everyone should use Arc—language/programmer fit is a key part of language/project fit. But when all three variables align, incredible things become possible. Not only HN, but YC would not exist without Arc. Another case that came up recently was Cloudflare; very different language, project, and programmer, but a similar story (https://news.ycombinator.com/item?id=22883548).
I appreciate you taking the time to reply to this, but come on, “YC would not exist without Arc” is a definite exaggeration. YC’s impact as a platform is immeasurable, but the software is quite simple and could have been written in any other programming language just as well (I’d argue better, since users wouldn’t be waiting now for new features to land if it weren’t for the language update).
You can write any program in any Turing-complete language, so that's obviously not my point.
Rather, it's that there's a deep interdependency between the three variables of language, programmer, and problem that give rise to a system like HN+YC (which was a single program until 2014). If you changed any one of those variables you'd either have gotten something radically different or nothing at all. So my statement is a bit like saying that YC would not exist without PG; and your objection is a bit like saying: that's an exaggeration, any person who did the same steps at the same time could have arrived at the same place (and perhaps better since that person might have been a better manager as well).
(Not only PG built YC, of course! But PG wrote the software and that was a critical piece.)
I'm not saying that all programs have this property about programming languages—rather, that some do, and they tend to be particularly interesting and creative. For another example, one might say that Unix would not exist without C.
There would be more interesting and creative systems in the world if we were more open as a community to these unexplored spaces. We exclude them in order to have the feeling that we know what we're doing, and we reinforce this by ridiculing and dismissing deviants. The social dynamics that exclude new creative possibilities are incredibly strong, which is one reason why when systems like this do end up succeeding, they tend to be the work of loners, weirdos, or people who have some strange mutation to withstand social pressure. (This by the way is the origin of the "$weird-language is only good for solo programmers" meme, ironically confusing cause and effect.) No doubt other fields work the same way; software is just the one I know well enough for the mechanisms to be obvious to me.
An analogy just occurred to me, which I want to note so I don't forget it. The relationship between a program and the language it's written in is like the relationship between a piece of music and the instrument it was composed for. To say "this system could have been built in some other language" is like saying "this music could have been composed for some other instrument". That may technically be true; music gets transcribed for other instruments all the time, just as programs get ported to other languages. But it misses the most important thing: the creative process by which the music or program got written in the first place.
There are intimate feedback loops between the mind of the composer, the developing music, and the design of the instrument—which possibilities it makes natural/easy vs. which it discourages/excludes. Every instrument and every programming language has a different set of these. They may not differ in what can theoretically be played on them, but they differ immensely in how they organize the space of possibilities—which ones are near at hand vs. out of reach. You can play the same scales on the piano, the cello, and the guitar, but where the mind goes next as it composes a new sequence of notes—not a scale, but a sequence that has never existed before—is deeply conditioned by the instrument it's working with, which is the medium it's growing in. Some next-notes are far more likely than others, and which next-notes those are differs greatly between instruments. In the same way, a program grows by accruing constructs (expressions, statements, forms, types), and the ones that are most likely to get added next are the ones that are most natural and nearest-to-mind, given the program so far. Which next-constructs those are differs greatly between languages.
Since each next-note or next-construct is deeply conditioned by the sequence it's adding to, this effect compounds as the system grows. It follows that, at least for the most interesting and creative systems, a program is literally unthinkable apart from the language it grows in. So much for "languages don't matter"—yet how often that untrue truism is repeated! The reason for this fallacy is that we take a program as if it existed prior to being written, which is impossible.
Seems to me, he's an insider who is in the know and he's made an effort to explain why he sees it that way:
As long as we're on the topic, consider this: the software for both HN and YC was just a single Arc program (and not a large one) for the first 9 years of their existence, during which they went from nothing to massively successful to industry-changing.
I mean, you don't have to agree, but his opinion is probably more informed on the topic than yours is.
I'm trying to write this in a way that does not come across as sarcastic or ungracious, but does it strike anyone else as odd that this change blocks on the next version of the underlying programming language?
Given that both the language and the forum are developed in tandem, and are linked to the degree that they ship together, it wouldn't be surprising that changes in Arc would be made specifically with the forum in mind.
The language, yes[0], but AFAIK the maintainers don't take pull requests.
Arc has a public fork called Anarki[1], which is built on Racket[2]. The Anarki version of the forum differs from the Arc forum, which differs from HN's own custom instance, which is closed because of various YCombinator business reasons.
And to be a little more specific (because I really like Arc), the releases are very rare, and minor at this point. I would be surprised if the Arc that was running HN was the latest released version of Arc, the news library notwithstanding.
Your many posts and emails about this are beginning to resemble harassment. We've given you many lengthy explanations and I've deleted dozens of posts at your request. I've spent hours engaging with you about this, answering questions and objections and explaining HN's approach in deep detail.
As I've explained many times in these conversations, we're happy to delete specific posts and to redact identifying information. What we don't allow is wholesale deletion of account history. You disagree with that—you've said so dozens of times—and this has now become repetitive, and your behavior has become abusive. Actually, it became abusive months ago, including with surprisingly vicious comments in email. We try to give people the benefit of the doubt and cut them slack for as long as we can, but I don't see what else to do at this point but ban your account and ask you to stop.
Question - is this at the top because you've pinned it there, or because people voted it so? I've been around here for 6-ish years now, and don't think I've seen any pinned mod comments before.
I pinned it. I usually do that for more boring reasons, like linking to a previous submission when an item is a dupe. But sometimes there are admonitions like https://news.ycombinator.com/item?id=23158853 from a couple days ago, and https://news.ycombinator.com/item?id=22827249 from a month ago, that try to steer a thread in a more guidelinesy direction. That works. But I try to do it sparingly.
I follow Dang's comments because I find it interesting how he moderates the site. I have definitely seen comments like this one which always appear at the top of threads, so I'm quite sure they're pinned.
This is absolutely pinned. I prefer to imagine that, since we don't notice such comments regularly, this power isn't habitually abused to float a particular opinion/angle to the top of threads. But that restraint isn't guaranteed, so I'd prefer a visible indication (e.g. an icon, akin to Reddit) that a comment has been pinned. Transparency for the win.
Of course, a truly Evil™ company would have both options available: a visible icon for transparently pinned comments, but also the ability to invisibly pin a comment to influence readers. There is no real foolproof method, unfortunately.
I don't know if pinning is even necessarily a chief tactic as I presume they're also able to arbitrarily modify a comment's votes, or at least to overlook manipulation of votes from an entity they've given agency to.
The problem with "by top score" is it would itself influence the scoring, even if only some people used it. The oldest comments would stay at the top of such a list, because they get seen more and thus have more opportunities for upvotes, creating a self-sustaining cycle. You always need a time counterweight.
That would be nice. And also a way to wrap all comments that are replies to first-level comments, so that we can see all of these and only read discussions on projects that interest us.
Sorry, it may not be the right word. I mean the effect that clicking on the little "[-]" does: a way to automatically click it for all second-level comments, so that only the first-level comments that directly answer the "Ask HN" appear.
If you find a place for comments sorting options (what my parent comment initially suggested), it could go there too (it's also related to comments display settings).
Maybe a line between the "add comment" button and the first displayed comment?
I struggled with whether this point is closest to pagination or sorting/ordering and ultimately chose here.
Is there any chance to get client-side thread collapsing?
Use case: suppose I'm interested in reading about side projects and not pagination or I'm "done" reading about side project X and want to get on to side project Y. If I could click to collapse the entire pagination thread (client-side only) and then later collapse project X's thread, that would represent an improvement in experience on this thread. (It's less clear that this applies generally to topics with 50 comments, but over 250, it could help.)
That's a great idea. I don't even think it's on our list. I'll add it.
We have an experimental feature to highlight new comments if you or anyone wants to give it a try - email hn@ycombinator.com. But you'll still have to scroll through the pages to find the new ones.
Consider whether we might be better off without that feature, which also makes it super easy to keep aging threads alive, which is something HN has subtly discouraged thus far.
I've definitely been considering it, but it's also the most-requested feature by the people who've been using the highlighting so far. I think as long as we make it relatively passive, i.e. you still have to scroll to see the new things, it might not be too much of an unwanted catalyst.
Worst case, if it did turn out to have a major negative effect, I hope (I pray?) we would have enough killer instinct to claw it back.
Hi dang,
Honestly I don't know how to say this other than I am so thankful that you take your job so seriously.
I can honestly say that I've never seen you act on HN other than in a positive way, and reading your contributions is part of the delight of visiting this site =)...
Sorry, it's just I can think of so few other sites where my first thought on seeing a mod post is not, "Oh what happened now..." and I just wanted to thank you for all the effort that you put in.
Hey Dang, another side project born right from this post. I've collected all the links from this post, across all its pages, and listed them here: https://born-out-of-covid.f22labs.com/
Let me know your thoughts. I will add more details and links after I wake up tomorrow.
Nice! That could form the basis for a future thread. (One observation in case it's helpful: I'm seeing lots of duplicates on that page.)
Edit: while I have you: you've posted several links to that already, plus you posted 10 of these: https://news.ycombinator.com/item?id=23189273. That's not allowed on HN—users consider it spamming—so please don't. It's of course ok to link occasionally to your own work when it's relevant, but not to use HN threads aggressively for promotion. The idea is always conversation, and one can't have a conversation with a commercial.
Hopefully this will reduce the use of virgin plastic for creating art pieces in the 3D printing community, and you might be able to create beautiful and useful things out of waste plastic while cleaning it up from the environment.
It's a profitable business.
I worked on this in my free time during quarantine.
I want to make the project more accessible so people around the world can build local recycling units. There is a lot of work that needs to be done, including making parts more standardized, demonstrating visually how the parts fit together, and writing microcontroller firmware to control the diameter of the filament. I don't have much experience with microcontrollers but I have ideas, so we'll see.
Some guys in our city are doing this on a larger scale, schools in the area collect all plastic bottles and in return they get 3d filament: https://www.greenbatch.com/
Wow, that's great. I wonder how they modify the PET polymer; from what I've been researching, it's not an easy polymer to print with, at least not without the glycol modification that is usually sold as PETG.
I’ve been looking into the Precious Plastic project. They have a lot of open source parts for recycling plastic. I couldn’t find any proof it was actually cost effective, though. How did you conclude that it’s profitable?
This is a really cool idea. I originally hoped it was about recycling your own plastic waste into filament for your own home printing, but I guess that's a bit too much to expect.
It's made of 38CrMoAlA steel and treated with gas nitriding to improve wear resistance and hardness. It includes free shipping from China, and it's probably 5-10 kg. It includes a machined nozzle too.
A wood drill has a continuous flight depth, while an extrusion screw has three zones; in the compression zone, the flight depth is gradually reduced to compress the polymer.
Other than that, I think you'll have trouble finding a reasonably straight wood auger drill.
With a precision-machined barrel and screw, the screw will not touch the barrel wall.
You can experiment with a wood drill and a steel tube, but I doubt you'll get consistent-diameter filament out of it.
If you fancy making a cheaper one for small-scale/personal use, try this one:
I live in a condo with a concierge service, and I need to order passes for delivery guys with a phone call. Naturally, this got very boring very quickly, so I made a Chrome extension which upon a click connects to a Voximplant app which calls the concierge, receives his voice input, forwards it to Dialogflow, and uses the intent recognized by DF to play back recorded audio tracks of my voice asking for a pass, answering questions or thanking the concierge.
I'm thinking about making the extension intercept the traffic to the website of my favorite delivery services and automatically place the call so the button click also won't be required.
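(For the curious, the core of it is just mapping a recognized intent to a pre-recorded clip. A rough sketch; the detectIntent and playAudio helpers below are hypothetical stand-ins, not the actual Voximplant/Dialogflow APIs.)

    // Rough sketch of the intent-to-audio step. `detectIntent` and
    // `playAudio` are hypothetical wrappers, not real Voximplant or
    // Dialogflow calls.
    const responses: Record<string, string> = {
      request_pass: "audio/please-issue-a-pass.mp3",
      which_apartment: "audio/apartment-number.mp3",
      fallback: "audio/thanks-goodbye.mp3",
    };

    async function handleConciergeSpeech(
      transcript: string,
      detectIntent: (text: string) => Promise<string>, // Dialogflow side
      playAudio: (file: string) => Promise<void>,      // call side
    ): Promise<void> {
      const intent = await detectIntent(transcript);
      await playAudio(responses[intent] ?? responses.fallback);
    }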
This is my view of the AI dystopia. AI for calling concierge, AI for answering concierge, AI for monitoring the concierge calls, AI for all the pointless things that humans used to do, but expanded and accelerated massively. Humans spending their lives training AI to defeat some new attack or spam bot.
The "AI" part here is a bug, not a feature. I wouldn't have to do it if I had an API for sending requests, or a condo-provided app, or at least if the concierge service used e.g. Telegram officially. Since neither of these is true, I had to resort to a workaround.
Incredible. By the end of the quarantine you'll be able to work as usual up until the point where you casually open the door to catch the concierge with your meal anticipating the fact that he doesn't need to ring the doorbell anymore.
I built a half-pipe with my 10-year-old son! We designed it in Sketchup, did a material takeoff in Excel, and built it with lumber that got delivered to our house. 4’ high, 8’ wide and 29’ long—and hours and hours of fun. Teaching things like trig and how to use a chop saw and the difference between different grades of plywood to a boy who’s learning-starved since school closed has been one of the more rewarding things I’ve ever done.
Excellent project. Not as hard as a half-pipe, but when I was 8 my dad and I built a lumber bike rack. Classic memory: it's still stationed in front of the garage, having taken quite a beating. For years on end, every summer night of my and my siblings' (sometimes drunken) teenage buffoonery ended with us riding or crashing into each slot.
I created https://web.trango.io, a LAN-based calling and file sharing service. Essentially, you can share files and make video and audio calls to those on the same network as you without having to go through the internet. Your data never leaves your local WiFi; the internet is only used to discover those on the same network as you. We use WebRTC and a signalling server to make this system work.
We were in the same office and needed a fast, simple way to communicate with each other without coming into close contact (COVID-19), and we wanted to do that over the LAN rather than use tools over the internet or our ancient intercom system. So now we are using this internally for fast file sharing and good-quality video calls.
Going to be introducing group calls soon and also the ability to integrate online calling and file sharing.
This is a really great idea from the security and privacy point for small teams and offices. And for sharing stuff in a house! Would this work through a VPN?
You could also introduce different channels, so different groups of people can chat. And you should add text chat through the data channel.
It works over some VPNs but not all. We will look into why and try to resolve it.
Yes, it works very well when we need to call each other or share files with each other in the office. At home, I mainly use it to share large pictures and videos with my wife, as it is very fast.
Yes, different channels and group calling should be added. We could add a chat function in it as well.
Even though it is based on your local wifi and the data never leaves your home/office network, we have still encrypted it by default, so we can integrate online calling and file sharing in a secure manner and don't have to redo the whole security aspect again!
Sure. We used webrtc, websockets and a signalling server which helps in discovering who is available on your local network. However, none of the calls or files ever go through the server.
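Roughly, the signalling server only relays offers, answers and ICE candidates; once the peer connection is up, everything flows directly between the peers. A minimal sketch with a made-up message format (not our actual code):

    // Minimal sketch of WebRTC signalling over a websocket (made-up message
    // format). Only the offer/answer/ICE exchange goes through the server;
    // media and files flow peer-to-peer afterwards.
    const signalling = new WebSocket("wss://example.invalid/signal");
    const pc = new RTCPeerConnection(); // LAN peers usually connect via host candidates

    pc.onicecandidate = (e) => {
      if (e.candidate) signalling.send(JSON.stringify({ type: "ice", candidate: e.candidate }));
    };

    signalling.onmessage = async (msg) => {
      const data = JSON.parse(msg.data);
      if (data.type === "offer") {
        await pc.setRemoteDescription(data.sdp);
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        signalling.send(JSON.stringify({ type: "answer", sdp: answer }));
      } else if (data.type === "answer") {
        await pc.setRemoteDescription(data.sdp);
      } else if (data.type === "ice") {
        await pc.addIceCandidate(data.candidate);
      }
    };

    // Files can go over a data channel; audio/video over added tracks.
    const files = pc.createDataChannel("files");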
We want to refine it a bit further before taking it opensource. :)
There is no range. Whoever is connected to the same Wifi network can communicate or share files. You could say the range is only as good as the signals from your router.
I've recently written a Python app that selects a random location in an area defined by a user-supplied shapefile [1], grabs corresponding aerial imagery from Google Maps, and posts it as a geotagged tweet:
I've built this tool because satellite imagery can be extremely beautiful [2], and I was looking for a way of regularly receiving high-resolution satellite views of arbitrary locations such as the center pivot irrigation farms of the American heartland [3] in my timeline. Plus, for obvious reasons, it's nice to see the world without actually having to go outside right now.
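(ærialbot itself is Python, but if you're wondering how the "random location inside the shapefile's area" step can work: rejection-sample points from the bounding box until one lands inside the polygon. An illustrative sketch:)

    // Illustrative sketch only (ærialbot is Python): rejection-sample a
    // random point until it falls inside the shapefile's polygon.
    type Point = [number, number]; // [lon, lat]

    function pointInPolygon([x, y]: Point, ring: Point[]): boolean {
      // Standard ray-casting test.
      let inside = false;
      for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
        const [xi, yi] = ring[i];
        const [xj, yj] = ring[j];
        if (yi > y !== yj > y && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
          inside = !inside;
        }
      }
      return inside;
    }

    function randomPointIn(ring: Point[]): Point {
      const xs = ring.map((p) => p[0]);
      const ys = ring.map((p) => p[1]);
      const [minX, maxX] = [Math.min(...xs), Math.max(...xs)];
      const [minY, maxY] = [Math.min(...ys), Math.max(...ys)];
      while (true) {
        const p: Point = [minX + Math.random() * (maxX - minX), minY + Math.random() * (maxY - minY)];
        if (pointInPolygon(p, ring)) return p;
      }
    }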
Currently, I'm running two Twitter bots based on ærialbot:
I think you can make the images a lot more appealing by adding some automated post-processing to them.
I don't know a lot about image processing algorithms but clicking "Auto" on google photos tweaks basic stuff like exposure, contrast, highlights, shadows, vibrance etc. so the image has a lot more "punch".
Good comment – ærialbot actually does increase contrast and saturation a tiny bit, but I've kept this very conservative for a couple of reasons:
* Some areas of the world are just naturally fairly flat and monochromatic, so a dynamic contrast/saturation/brightness adjustment (e.g. one that would turn the darkest pixel black, the brightest pixel white and linearly map the rest between these extremes) would not work for these areas.
* The available satellite imagery has been captured and processed in a variety of ways depending on the region, so a constant contrast/saturation/brightness adjustment might work well in some places, but overcorrect things in other places (especially urban areas in the US and Europe tend to already be fairly saturated and contrasty).
Basically, doing this well would involve a whole bunch of testing and fine-tuning. And since not even Google (the source of the imagery) seems to do this, I decided not to bother: Keeping the data basically the way I receive it is easy and "honest".
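(To make concrete what such a dynamic adjustment would look like, here's a rough sketch of a linear stretch over canvas pixel data. ærialbot deliberately doesn't do this, for the reasons above.)

    // Sketch of the "dynamic" linear stretch discussed above (which ærialbot
    // deliberately does NOT apply): map the darkest pixel to 0 and the
    // brightest to 255, scaling everything else linearly in between.
    function linearStretch(img: ImageData): ImageData {
      const d = img.data; // RGBA bytes
      let min = 255, max = 0;
      for (let i = 0; i < d.length; i += 4) {
        for (let c = 0; c < 3; c++) {
          min = Math.min(min, d[i + c]);
          max = Math.max(max, d[i + c]);
        }
      }
      const range = Math.max(1, max - min); // avoid divide-by-zero on flat images
      for (let i = 0; i < d.length; i += 4) {
        for (let c = 0; c < 3; c++) {
          d[i + c] = ((d[i + c] - min) * 255) / range;
        }
      }
      return img; // flat, monochromatic scenes get badly overcorrected by this
    }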
I finally put my photos up on my personal website. The only constraint I gave myself was to build a site that doesn’t need Javascript to load.
In the end I ended up using Next.js as a static site generator that pulls all the routes from my directory structure, making it possible to add new photography collections and filters as I go.
Might be overkill for the use case but it was fun to learn. The irony is I had to write a bunch of JS to produce it.
Still need to optimize the image sizes and I am thinking about adding filters for b&w/color/format.
Looks great! I did something similar with hugo and tried to automate the process as much as possible.
I use a utility called jhead to resize, fix rotation issues, and rename photos by date - then I tied this to a folder action on macos so I can just drop photos in a folder and they get renamed and resized.
Then Hugo has this cool 'smart' cropping feature which tries to crop based on content [1] - and the end result is now all I do is drop photos in a folder and publish and it comes out looking pretty good [2].
Nice! I found Hugo about halfway through working on this and it seemed like a great solution as well. The jhead utility would save me a ton of time, as I ended up cross-referencing my negatives to find processing dates, which are all in the metadata.
I ended up using sharp [1] since it was so easy to integrate into my workflow.
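(For anyone curious, the integration is only a few lines. A sketch with made-up paths and sizes, not my exact build script:)

    // Sketch of the sharp-based resizing step (made-up paths/sizes).
    import sharp from "sharp";

    const widths = [480, 960, 1920];

    async function buildVariants(src: string, outDir: string): Promise<void> {
      for (const w of widths) {
        const base = `${outDir}/photo-${w}`;
        // webp for browsers that support it...
        await sharp(src).resize({ width: w }).webp({ quality: 80 }).toFile(`${base}.webp`);
        // ...and a jpeg fallback for the <img> inside <picture>.
        await sharp(src).resize({ width: w }).jpeg({ quality: 85 }).toFile(`${base}.jpg`);
      }
    }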
Cool project! I set out to do something similar as well, but with a slightly different scope. I wanted a web gallery that I can use to access my whole (substantial) collection of pictures, and that doesn't need any maintenance/work to get started. It lists directories and generates thumbnails on the fly; no database needed.
No JS would have been nice, but I ended up making the content draw and reflow in JS, as I wanted to keep the aspect ratio of the thumbnails instead of showing a bunch of squares, for which a simple flexbox would have been enough.
OP, have you tried loading="lazy" ? I don't know if it works with the picture tag but it is worth trying I think.
Very often, webpages contain many images that contribute to data-usage and how fast a page can load. Most of those images are off-screen (non-critical), requiring user interaction (an example being scroll) in order to view them.
Loading attribute
The loading attribute on an <img> element (or the loading attribute on an <iframe>) can be used to instruct the browser to defer loading of images/iframes that are off-screen until the user scrolls near them.
The HTML <picture> element contains zero or more <source> elements and one <img> element to offer alternative versions of an image for different display/device scenarios.
The browser will consider each child <source> element and choose the best match among them. If no matches are found—or the browser doesn't support the <picture> element—the URL of the <img> element's src attribute is selected. The selected image is then presented in the space occupied by the <img> element.
To decide which URL to load, the user agent examines each <source>'s srcset, media, and type attributes to select a compatible image that best matches the current layout and capabilities of the display device.
The <img> element serves two purposes:
It describes the size and other attributes of the image and its presentation.
It provides a fallback in case none of the offered <source> elements are able to provide a usable image.
Common use cases for <picture>:
Art direction. Cropping or modifying images for different media conditions (for example, loading a simpler version of an image which has too many details, on smaller displays).
Offering alternative image formats, for cases where certain formats are not supported.
Saving bandwidth and speeding page load times by loading the most appropriate image for the viewer's display.
If providing higher-density versions of an image for high-DPI (Retina) display, use srcset on the <img> element instead. This lets browsers opt for lower-density versions in data-saving modes, and you don't have to write explicit media conditions.
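Putting the two together, a sketch of that combination built via the DOM (the static markup equivalent is the same structure: a <picture> with a webp <source> and a lazy-loading <img> fallback):

    // Sketch: a <picture> with a webp <source> and a lazy-loading <img>
    // fallback, built via the DOM.
    function lazyPicture(webpSrc: string, jpegSrc: string, alt: string): HTMLPictureElement {
      const picture = document.createElement("picture");

      const source = document.createElement("source");
      source.type = "image/webp";
      source.srcset = webpSrc;

      const img = document.createElement("img");
      img.src = jpegSrc;      // fallback when webp (or <picture>) is unsupported
      img.alt = alt;
      img.loading = "lazy";   // defer loading until the user scrolls near it

      picture.append(source, img);
      return picture;
    }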
Good point, I hadn't thought of that. I still have to troubleshoot it a bit in Firefox, but it looks like it's working in Chrome.
The resolution scaling is a good idea as well. I used the picture tag initially as a fallback for browsers that don't support webp images. More importantly I need to actually create scaled images which I have been putting off...
You might want to add a copyright notice. Even if you don't want to monetize them, it's better for others to have to ask your permission rather than lose the photos to hoarders and later watch them make money while you get nothing.
I've tried to make it intuitive enough that you don't have to read a page of instructions first but let me know if I've missed the mark. I'm hoping you can learn the gameplay mechanics as you play.
I'm not using any web frameworks for this which was actually fun to do. It gave me a chance to improve my understanding of CSS animations + reflows, and catch up with changes to JavaScript.
Just played with it for a few minutes, I liked it! The only bit that occasionally tripped me up was going diagonally in my word construction, but I got the hang of it after a few rounds.
One thing that may make this easier is beveling the edges of the tiles slightly so you don't accidentally select letters on either side of the diagonal unintentionally.
> One thing that may make this easier is beveling the edges of the tiles slightly so you don't accidentally select letters on either side of the diagonal unintentionally.
Thanks, I'll have a play with that. If you use your web devtools to inspect the HTML, you should see over each letter tile, there's actually an invisible tile on top of each one at a 45 degree angle that's being used as the real touch/mouse target for selecting letters (as accidental selection is awful if you use the actual tile as the target). Maybe there's a more reliable way but playing with the target shapes and sizes will probably help.
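Roughly, the idea is something like this (a simplified sketch, not the game's actual code):

    // Simplified sketch of the invisible, 45-degree-rotated hit target that
    // sits on top of each letter tile. The rotated square ("diamond")
    // shrinks the corners, so a drag moving diagonally is less likely to
    // clip the tiles on either side. Assumes the tile is position: relative.
    function addHitTarget(tile: HTMLElement, onSelect: (tile: HTMLElement) => void): void {
      const hit = document.createElement("div");
      hit.style.position = "absolute";
      hit.style.top = "10%";
      hit.style.left = "10%";
      hit.style.width = "80%";
      hit.style.height = "80%";
      hit.style.transform = "rotate(45deg)";
      hit.style.opacity = "0"; // invisible, but still the pointer target
      // Touch drags need extra handling (e.g. elementFromPoint on pointermove).
      hit.addEventListener("pointerenter", () => onSelect(tile));
      tile.appendChild(hit);
    }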
Nice, fun. I think a 5x5 grid would be better, at least for me. I can't hold a 6x6 in my head. Or maybe that's inevitable with the constant cycling of letters.
And maybe I wanted vowels and consonants to look different (maybe a subtle-ish color change) so I could see at a quick glance how I'm doing at board management. But who knows if this would be useful in practice. (Actually now I wonder if I can do this myself in CSS...)
Thanks, that's useful feedback! I agree some colour variations would be good for guiding the eye.
I partly moved from 5x5 originally to 5x6 because it looked better on mobile/portrait but then I found in play testing more letters gave people more room to manoeuvre when the orange/bonus tiles start appearing (where you want to delay using them).
Loved it! A few things that confused me:
1. "?" is a wild card, right? (meaning it can stand in for any letter)
2. I once got a diphthong "qu" in a single tile - not sure if it was a glitch?
3. Why is a tile sometimes yellow?
Great feedback, thanks! Sounds like I need to make the game rules clearer but hoping I can do that without explicit instructions somehow (it's a fun design challenge too). Let me know if you've got any ideas.
> 1. "?" is a wild card, right? (meaning it can stand in for any letter)
Yep! It'll autocomplete to a valid letter as you use it - there should be tons of valid 3 letter words so I was hoping people would figure it out with a quick bit of experimenting. Try just making up a long word to see if you can find one for big points (at the cost of the time it takes you to guess).
> 2. I once got a diphthong "qu" in a single tile - not sure if it was a glitch?
Yep, there's no "q" tile, only a "qu" tile to make it easier to spell something. Adding diphthong to my personal dictionary!
> 3. Why is a tile sometimes yellow?
Scoring explanation:
- Word scores depend directly on the length of the word and go up quickly, e.g. for a 3-letter word you get only 1 point (a deliberately lame amount), then 4 points for 4 letters, 9 points for 5 letters, 16 points for 6 letters, 25 points for 7 letters.
- The exception is that yellow/bonus tiles count as two letters when calculating the word score, so one or more bonus tiles in a word grow the score massively, e.g. a 5-letter word using 2 bonus tiles gets you 25 points instead of the usual 9.
- You get rewarded with a bonus tile if you spell a long word (this one for sure isn't obvious unless you play several games).
One of the core strategies is to earn a few bonus tiles with longer words, then make maximum use of them to level up by spelling words using multiple bonus tiles at once.
For the scoring, I was hoping for players to go through this thought process 1) "hmm, 3 letters words only give you a single point" 2) "4 and 5 letter words get way more points, and the score goes up rapidly by the length of the word" 3) "the bonus/yellow tiles boost the score up even faster". Sounds like this needs more work though so thanks.
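(In code form, the rule boils down to something like this; a sketch, not the game's actual implementation:)

    // Sketch of the scoring rule described above: a word's score is the
    // square of (effective length - 2), where bonus tiles count as two
    // letters.
    interface Tile { letter: string; bonus: boolean; }

    function wordScore(tiles: Tile[]): number {
      const effectiveLength = tiles.reduce((n, t) => n + (t.bonus ? 2 : 1), 0);
      return Math.pow(effectiveLength - 2, 2);
    }

    // 3 letters -> 1, 4 -> 4, 5 -> 9 ... and a 5-letter word with two bonus
    // tiles has effective length 7, so it scores 25 instead of 9.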
This was great! Loved it and the time per level felt good as in I wasn't inordinately stressed. Got to 249 the first time but had some scares for sure.
Thanks! Glad the timing didn't feel stressful. I've had some people say they don't generally like timed games but when it was game over here they felt it was fair instead of blaming the game - seems a decent thing to aim for.
You actually get an extra second before your time runs out to increase the chance of "just in time" saves. :)
Really fun, very polished! Stream of consciousness feedback:
After a few levels I found clusters of letters I was having trouble with building up over time (there was an “X” right in the middle, and like six “I”s clumped up). I found myself wishing there was some mechanic that could clear them out. Maybe that would make it too easy though.
I really love the animation when you make it to the next level, very satisfying.
On mobile safari, holding your finger on a tile for too long causes a text selection.
Thanks, stream of consciousness feedback is ideal. :)
> After a few levels I found clusters of letters I was having trouble with building up over time (there was an “X” right in the middle, and like six “I”s clumped up). I found myself wishing there was some mechanic that could clear them out. Maybe that would make it too easy though.
Yep, this seems worth looking at. Maybe smarter letter randomisation would help, e.g. don't allow a new "I" tile in a place that already has two "I" tiles, and don't allow clusters of consonants.
> I really love the animation when you make it to the next level, very satisfying.
Great! I thought I needed something flashier so I'm happy this could be enough.
> On mobile safari, holding your finger on a tile for too long causes a text selection.
Ah, thought I caught that. Thanks!
> I’d love a “zen mode” with no timer!
Yep, I want to figure something out for this. It's pretty fun thinking of different scoring mechanics and how this impacts gameplay strategy.
That was fun. I wanted to create a platform where you could drop in a game like yours and instantly enable leaderboards with groups of friends. I have a couple of test games plugged in. Check it out.
Wow, very well made. I did a similar thing (for kids) a couple of years back, but I really like that you can swipe in your version! https://finding-nora.com/
Thanks, it's more work than it looks, as you'll know. The swiping code was a bit of a pain to be honest; the web APIs for handling mouse and touch events together aren't well unified.
Haha, we have the same confetti effect at the end. Nice work! Is the situation with iOS and PWAs any better since you last worked on it (I saw your note about a few issues on the GitHub page)?
Yes I also wanted to add swiping back then but I guess it’s even harder if you’re using React.
PWAs are slowly getting better on iOS; there was another new release of Safari that fixed a lot of bugs. But it’s still very niche, since Add to Home Screen is so hard for users to find.
Okay, I can add them if you let me know of any others. I'm using the same dictionary as Letterpress (which is one of the best openly available ones I've found) with a few additions.
1) Trail Router (https://trailrouter.com) - This is a running route planner that favours greenery and nature in the routes it generates. It can generate point-to-point or round-trip routes that meet a specified distance. I developed this because I am (or was...) a frequent traveller for work, and want to run in nice areas rather than by horrible busy roads when I'm visiting somewhere new. Naturally, the utility of this tool is limited at the moment for people stuck in lockdown!
2) Fresh Brews (https://twitter.com/FreshBrews_UK) - I've been touring the UK's finest craft beer breweries from my own home in recent weeks. New beer releases sell out very quickly and I was frequently missing out. Fresh Brews is a simple bot that monitors the online shops of my favourite breweries and posts when a new beer is released to the shop, or an item comes back into stock.
Super nice work on trailrouter. The several routes it produced for me in Berlin look quite nice, being familiar with the surroundings.
Would be cool to see how you built it, if you put it on github.
I’m curious about building a similar thing for cycling – crazily, neither Komoot nor Google Maps lets you filter by type of road, and I’d like to select only bicycle paths and roads where cars can’t go. Even if it means cycling much longer, I’d simply like to avoid cars, and in Berlin that’s possible 90% of the time.
I'll probably write a blog post on how it's built though - there's quite a lot going on under the hood!
Supporting cycling is a possibility for the future. I don't think you'd want to absolutely exclude non-cycleways (as it might make many routes impossible), but you could certainly weight very heavily against them and show on the map which parts of the route were dedicated to cyclists vs which were not.
Congratulations on creating trailrouter! This is one of the most unique and useful side projects I’ve seen in quite a while. I had a lot of fun looking at the various suggestions it offered for my neighbourhood, and I could see how this could help people enjoy their neighborhood a lot more.
If you have the time, I’d also love to read a blog post (or even series) explaining how you built this. Your answer on the Graphhopper forum was very clear and makes me think that a more detailed version could be super useful for a lot of people.
Thanks very much for the kind words! I wasn't sure if others would find the technical details of this topic interesting (it's my first foray into GIS work), but it sounds like they would, so consider a blog post in the works.
Thanks for the link, that’s a very interesting read.
Subscribed to you on twitter to get updated if you publish a post on that!
On the point about absolutely excluding roads: yes, maybe some routes become impossible, but in my planning it’s really about the journey. Even if I have to go hundreds of extra kilometers, even extra days, I’d rather ride via cycle ways and tiny countryside streets (ideally without speed limits above 30 km/h) and see almost no fast-moving cars, which in the Berlin/Brandenburg area, for example, really seems possible if you plan it manually. Judging by how well your rules work for trails, it surely can be done better than what I have seen so far.
Thanks, I didn’t know about this one; the tiles they have are really nice. But there’s still no control over avoiding larger roads, or what to prefer as a fallback. I still get some larger roads selected, but it’s definitely an improvement over Google Maps.
I also made the mistake of trying Google Maps for bicycle routes early last year. Searching for a better alternative, I found komoot.com; perhaps you might want to try it.
For my region, I'm pretty happy with it (although it has its issues in places where you have to pass along bigger roads for short distances). It's OSM-based, though, so that might vary.
Trail Router is amazing! I just had it suggest a 5k route to me and it suggested my favourite 5k route immediately. Currently, it doesn't seem to care about elevation; that might be something to look at if you are running out of feature ideas. Is it open source?
I'd actually also be willing to donate a little for the development of such a cool tool.
Thanks! There is an option in the settings menu to "Avoid hills", and in the not-too-distant future that will become a slider that allows you to prefer hills or avoid them.
It's not open source yet, but I might open it up in the future. There's a donate link hidden away in the About page. Any donation would be much appreciated, and would help with the server costs (it needs a huge amount of RAM to store the whole planet's data).
I just had the exact same experience! It nailed the “close to home” routes I do perfectly. Will definitely use this when I want to change it up and run somewhere else in the city.
Trail Router is great! Tested it out for a few cities I know like Stockholm, Gothenburg, Las Palmas and Hanoi and it's really good.
The only thing I noticed is that it seems to prefer going next to water over anything else, and has a slight tendency to take detours to run next to very small city parks. Running past a small city park for 50 meters might not be worth the detour.
I tried out Trail Router and found a trail hidden in the woods within 3 miles of my house that I never knew about in the several years I have lived here. Can't wait to check it out. Nice tool!
Just wanted to drop by saying that trailrouter looks great. One small suggestion. I'd add an option to avoid cemeteries as (at least in some cultures) running through one could be considered ill-mannered.
1. Would it be possible to add an undo button for when changing a route goes wrong?
2. I think this would also work really well for planning routes and/or measuring their precise length. In fact, I have been looking for such a tool for ages! One like yours which also allows taking small tracks and paths and not just roads! Unfortunately, currently it's still somewhat difficult to select the precise route as changing it in some place might suddenly change it almost entirely. (Again, an undo button would be nice.) I assume this happens because Trail Router still tries to minimize a certain loss function? Would it be possible to disable that entirely, so that one could "free-draw" routes?
[UPDATE]: Just noticed that one can actually disable all routing preferences. This seems to be doing the trick – very nice!
More love for TrailRouter here. Especially love that you can export a GPX for a watch. I get stuck running the same unimaginative routes, so even the ability to have it plan a round trip from my house is amazing. Even more so that it finds green space. Thanks for this!
Trail Router is incredibly well done. I've had the exact problem that you're describing. I'm currently training for my first 100 miler and I'm getting bored of my regular routes (which hurts motivation) so I'm really excited to try it out.
Trail router looks great. I put my location in and the first three routes were already regular routes of mine, and the next one was one I'll probably have to incorporate. Bookmarking it for when I travel next.
Trail Router is great for my use-case during lockdown. I've been trying to walk different ~3mi round-trip routes during quarantine so as to not get bored and this is very helpful to give me more ideas. Thanks!
TR worked pretty well. Since coming home from eastern Europe two months ago, I’ve been exploring the nearby trails on an almost daily basis. Your website suggested the exact 6 mi hiking route that I enjoyed yesterday! When I cycled through the other 6 mile options it became less imaginative and more urban-street-centric. Still, a good interface and helpful as a reference tool. Thanks!
Trail Router looks fantastic! In light of the shutdowns, I've restarted my running / training and I'm planning for a 10 miler in October. This will be a fun way to plan runs and keep me interested. That's always been my big issue - I get bored easily if I see the same thing over and over.
Hi! I love the Trail Router. How did you get the data to create a Strava heatmap? I see the API can download routes by ID, but I'm curious how to do it based on geographic area. Thanks :)
On Trail Router, how do you determine the safety level of roads? One of my generated routes included a 45mph road, and a few other times it had me cross I-95 (an 8-lane highway here). Overall very neat though. I'd been using mile meter to manually draw out my routes.
There is a setting in Trail Router to "Avoid Potentially Unsafe Roads" which makes it much more conservative about road choices.
It uses OSM data for routing information, but this is quite poor for pedestrian safety (particularly in the US, it seems, where some 'secondary' roads are very safe and others are death traps). There are specific tags for foot access and sidewalks, but they are rarely used outside of cities. Crossing I-95 should only be done if there's a bridge going over it; it should never take you across the lanes of traffic (!). If it does, please do email me a link to the route if you don't mind.
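To give a flavour of what "much more conservative about road choices" means, the weighting is conceptually a tag-based penalty on each edge's cost. A rough sketch (the real routing profile is more involved):

    // Rough sketch of OSM-tag-based road weighting (the real routing
    // profile is more involved): multiply an edge's base cost by a penalty
    // so the router strongly prefers footways and quiet roads.
    function safetyPenalty(tags: Record<string, string>): number {
      if (tags.highway === "motorway" || tags.highway === "trunk") return Infinity; // never route here
      if (tags.foot === "no") return Infinity;
      if (tags.highway === "footway" || tags.highway === "path") return 1.0;
      if (tags.sidewalk === "yes" || tags.sidewalk === "both") return 1.2;
      if (tags.highway === "residential") return 1.5;
      if (tags.highway === "primary" || tags.highway === "secondary") return 5.0; // OSM rarely says whether these are safe
      return 2.0;
    }

    // edgeCost = edgeLengthMetres * safetyPenalty(edgeTags)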
I'm leaving a comment here as a reminder too! I've been playing with open street map data to learn routing algorithms, I would love to compare notes : D
* Add sliders for setting greenery and hills preferences (so you can _prefer_ hills, rather than just avoid them)
* Add support for preferring particular surfaces (e.g. tarmac vs trail)
* Add support for creating distance-specific round-trip routes that pass through one or more waypoints (at the moment you can only specify a start/stop location for round-trip)
* Sometimes round-trip routing suggests some very complex routes which would be hard to follow when running. Some work could go into simplifying such cases.
* Improve the UI/UX, it's still quite fiddly, especially on mobile
More distant:
* Cycling support
* Native mobile apps (there are already mobile apps for Trail Router, but they're basically WebViews)
* Offline routing support in the mobile app?!
* Direct syncing of routes to watches/Strava would be nice, but there are no open APIs for this yet.
If you have any other suggestions feel free to chip in!
My friend and I have just started something we are calling PomPals, which is a pomodoro timer that syncs with your friends, so you can hang out together during your breaks.
It is an Electron (toolbar) app which uses WebRTC, so it should be fully P2P.
It is too early to use or show, but I did not want to miss out on this thread!
I don't know much about WebRTC or electron, so forgive me if this question makes no sense: if it's a simple toolbar app (such as a Chrome extension) why is electron needed?
Hi! When I say toolbar, I mean the native toolbar (it runs outside the browser). We went with this approach because you may not always be in a browser (when coding, for example).
This is an idea I had but never executed on: Teamodoro. I like your name better! Hope you can take it to a good place so I can try it with my remote co-founder : )
Well, my main side project is the same as it's been for the last couple of years, an animation/vector editing tool written in Rust: https://github.com/logicalshift/flowbetween
It's sort of starting to make the transition between a pile of ideas and an actually useful tool at the moment. The whole idea is to be a vector editing application that works more like a bitmap tool when it comes to painting, so there's a flood-fill tool and a way to build up paths just by drawing on the canvas rather than having to manually mess around with control points.
The way I built the UI is unique too, I think. Choices of UI libraries for Rust were quite limited when I started, so I built it to be easy to move to different libraries. I don't think there's any other UI library in existence that is as seamless for switching between platforms (or that can turn from a native app to a web app with a compiler flag without resorting to something like Electron).
I suspect that https://github.com/hecrj/iced would now be another UI library that’s as seamless for switching between platforms. Flutter might qualify too (or might not).
I’ve been keeping an eye on the various UI libraries when they come up: right now it seems to take me around a month to add a new one so I’m waiting for one to get traction.
Something else that’s a problem is that as a drawing app, FlowBetween wants to be able to get access to data from a digitizer: pen pressure and tilt in particular. A lot of UI libraries don’t think to pass that through from the operating system, or have an awkward API (browser support is also very spotty for this)
Yeah, lack of support for different input media has been a real pain point for me—most of the developers of these things have mice only, and don’t stop to bother about touch or pen input. I use a Surface Book which has mouse, touch and pen, and I like to use all three forms at various times.
If you’re trying to do touch and pen on non-web platforms, things tend to be very messy if you want to handle all three types of pointers optimally.
But browser support spotty? I find the pointers events API a marvellous abstraction over platform differences, doing the right thing automatically for >99% of cases, and making the remaining cases possible. The only thing I feel it actually lacks is standardised gesture support for touch. I wrote a simple pressure-capable drawing app a couple of years back in the very early days of pressure-sensitivity (back when Edge was the only browser on Windows that supported it, so I targeted Edge only until other browsers got it), and I found it a refreshingly straightforward system to work with. And since then, everyone implements things like tilt and pressure.
So I’m curious to hear what you’re quibbling over, as someone that’s been using this stuff in anger more recently than I.
I suspect some of my experience is now out of date, as it's now spread out over quite some time. The most recent issue I had to deal with was Chrome: when drawing the canvas at high-res it was being a bit slow at blitting some bitmaps and so was running at 30fps. Something is tied to the framerate with the pointer events implementation and so the events also lagged behind, which made drawing on the canvas quite difficult as the display was 250-500ms behind the user. Eventually 'fixed' by turning the resolution down, but it was a real pain finding what part of the application had got behind (FlowBetween being designed not to lag but to catch up when the display can't keep up). That's quite a subtle one and the pointer events lagging is easily mistaken for the frame rate lagging.
Other browsers don't do this, but they've had a few other issues: what I remember in particular - some only support pressure information using the touch API, and some seemed to support pressure information on different APIs on different platforms, so both pointer events and touch events were needed.
All of these are maturity issues rather than real problems with the API, though and I haven't re-checked some of the older issues recently - that Chrome issue was still happening back in January so might still be around, but the others I last encountered over a year ago so may have been fixed by now.
If you haven’t been using it, make sure to use PointerEvent.getCoalescedEvents where available, which unlinks the events from the display frame rate. Anything using pointer events for drawing should use it. (But remember that events can come in at any speed, e.g. a 240fps pen should coalesce four events per 60fps frame—so make sure you can cope with lots of events.)
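A minimal sketch of the pattern (addStrokePoint is a stand-in for whatever records the stroke):

    // Minimal sketch: read the coalesced pointer samples so drawing isn't
    // limited to one event per displayed frame, and take pressure/tilt off
    // each sample. `addStrokePoint` is a hypothetical stand-in.
    declare function addStrokePoint(x: number, y: number, pressure: number, tiltX: number, tiltY: number): void;

    const canvas = document.querySelector("canvas")!;
    canvas.addEventListener("pointermove", (e: PointerEvent) => {
      const samples = e.getCoalescedEvents ? e.getCoalescedEvents() : [e]; // fall back where unsupported
      for (const s of samples) {
        addStrokePoint(s.offsetX, s.offsetY, s.pressure, s.tiltX, s.tiltY);
      }
    });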
I believe that the pointer events API is in current browsers now uniformly superior in functionality to the touch events API which it obsoletes.
I like to draw, and I suppose my frustrations with other animation packages that I’ve tried were the main inspiration. It’s quite nice to have something that combines two hobbies into one.
When I started I picked Rust because I’d been learning it and wanted to try using it on a more substantial project. I’m very happy with it as a choice of language: it definitely has a difficult learning curve, especially with the way borrowing looks similar to references in garbage-collected languages but works very differently. However, it’s a very expressive language: something about it makes it very easy to write code quickly that’s still very easy to follow later on.
Awesome. I dabbled with Rust several years ago, but have been thinking about diving back into it... Do you have any recommendations about where to start?
The official Rust Programming Language book is excellent and was all I really needed to learn the language. I had a small project to work on that I didn't mind rewriting after my first attempt (my build server is a NUC, and I wanted to write some software to flash the LEDs on the front to indicate build state).
I suspect everyone goes through a phase of hating borrowing when learning Rust: it's helpful to know that it's something that eventually 'clicks' and really stops being an issue. It didn't exist when I was learning, but 'Learning Rust with Entirely Too Many Linked Lists' looks like it would have helped a lot.
I've spent a decent amount of time learning over the time that this quarantine has been going on.
A major issue I've seen is that most beginner-focused educational content isn't fast enough for a more experienced developer to learn from, and time is often a big constraint for us. I've had numerous times where I had to learn a new framework within a 1-2 week span in order to plug a work gap or speed up a project, and I found no good resources that would let an intermediate developer like me learn faster.
This is why I am currently creating content targeted specifically at intermediate to advanced developers and teaching new languages and frameworks (using the 'constructivist' method) in a way that makes the process of learning them much more efficient. In short, faster.
It's a little rough around the edges but you can check out the blog where I share my current tutorials here: https://fromtoschool.com.