
Looks like the website is built with Laravel[1] using Livewire[2] (Alpine.js on the front end), with Flux[3] as the UI library.

[1]: https://laravel.com/

[2]: https://livewire.laravel.com/

[3]: https://fluxui.dev/


Agreed, it's a missed opportunity not to be able to change a URL from github.com/cyclotruc/gitingest to gitingest.com/cyclotruc/gitingest and simply receive the result as plain text. A very useful little tool nonetheless.


Yeah I'm going to do that very soon with the API :)


Oh! Something I took part in is on HN. That's a first. Almost everything there was practical. I highly recommend checking out all of Max's work; it's beaming with creativity.


It's not mentioned in the post, but is this your actual passport photo? Was it accepted and used, and is it on your physical passport right now?


While this may not have been done, I don't see a reason why these wouldn't have been accepted. Source: I am a certified passport photographer.


How do you get certified as a passport photographer?


In the US, anyone can take the photo, including yourself.


I did this. It's surprisingly hard to find a solid white background and get uniform lighting at home. Took many shots.


In the U.K. people used to go to a booth, but nowadays you just get a well-lit white wall and take a selfie on your phone.


Fifteen years ago I did my own in Canada, and just wrote my own name and phone number on the back as the "photographer". They gave me the hairy eyeball at the passport office, but let it slide since the pics did meet the requirements.

After that I got them done at the local framing shop.


I'm not sure anyone tried to actually use it as a passport photo. Would have been a great touch though.


It's actually a really exciting thought...


Would that even work? Are you not in Europe, where passport photos are taken on location?


> where passport photos are taken on location

Europe is not a single thing and that statement is not correct.

I'm in Estonia (which is in the EU) and you can either submit a picture online or take the picture on location.


An oddball question, but do you have that government document/card that also works as a smartcard to create digital signatures? Does that get used typically in interactions with the government (or maybe even businesses)?


Late answer but just a note that if you're interested in the tech aspect of it, then the Estonian ID cards implement the IAS ECC spec for all the public key stuff:

> The application enabling PKI functionalities in Estonian eID Documents is IAS-ECC, a sophisticated but standardised solution conforming to CEN TS 15480-2 (European eID) with extra features.


Not gp, but a resident:

> do you have that government document/card that also works as a smartcard to create digital signatures?

Yes. All ID and residence cards in Estonia include an embedded certificate pair, used for login (via PIN1) and signing (via PIN2).

> Does that get used typically in interactions with the government

ID cards, SmartID and MobileID are the only ways to log in to any government system or bank. (Some banks also have PIN calculators.)

Extra info:

Instead of ID cards, on a daily basis most people use SmartID (same as ID cards, but as a mobile app) or MobileID (same, but embedded in the SIM card) for auth operations.

Many computers in the government, hospitals and schools have a keyboard with an ID card slot and users can (or sometimes are required to) use their ID cards to log in.

There's also a free-software DigiDoc4 app available for desktop and mobile, which allows users to sign or encrypt any document or folder for free, using one of the three authentication methods mentioned above. You can use it to sign contracts like rental or business agreements.
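If you're curious to poke at the card programmatically, here's a minimal sketch of signing with it over PKCS#11 (assuming the OpenSC module and the PyKCS11 Python library; the module path, PIN and mechanism are illustrative and vary by card generation):

    import PyKCS11

    lib = PyKCS11.PyKCS11Lib()
    lib.load("/usr/lib/opensc-pkcs11.so")  # module path varies by OS
    slot = lib.getSlotList(tokenPresent=True)[0]
    session = lib.openSession(slot, PyKCS11.CKF_SERIAL_SESSION)
    session.login("12345")  # PIN2 authorises signing on Estonian cards
    key = session.findObjects([(PyKCS11.CKA_CLASS, PyKCS11.CKO_PRIVATE_KEY)])[0]
    # newer cards carry EC keys, so sign a pre-computed digest with raw ECDSA
    digest = bytes(32)  # stand-in for a real SHA-256 hash of the document
    signature = bytes(session.sign(key, digest, PyKCS11.Mechanism(PyKCS11.CKM_ECDSA, None)))
    session.logout()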


In both European countries where I've been involved in a passport application, we had to bring photos along, which we got taken by a photographer in a copy store. There was no certification of the photographer involved that I'm aware of, just the usual list of requirements for the photo that they had to follow.


In Germany and Japan, you bring one. It wouldn't be an issue if it fit the biometric spec.


I'm in Europe and mine sure was not taken on location. Had it done in a mall, and they sent it electronically to the police.


Of the 3 or 4 documents I've had made within 10 years requiring this specification, only once was the picture taken on location.


In Britain you just upload a digital photo so it would work here.


How odd, there's no verification that it's your photo.


There is now: there's a system where you use a webcam to do a live facial-recognition check to verify your identity - with a for-profit business (because that's what the Tories do: turn ordinary parts of government into a way to pay out private profits).

That only confirms the person sitting at the computer is the person in the uploaded photo though.

When you first get a passport you have to have your identity confirmed by a professional person with community standing: a teacher, policeman, doctor, someone like that.

They do background checks, it seems quite rigorous.

Once you have a passport/driving license they allow you to reuse a recently verified picture in your application to get the other document.


Don't know about Britain but the US also allows passport renewals by mail, so they can't check the photo against your face but they presumably can check it against your previous passport photo.


How did you come to take part?


It was fairly random, someone in my network had mentioned that Max was looking for people to take part in the project and I reached out. I was given a date and time slot and that was that.


Indeed, many applications I would expect to prevent sleeping (some audio playback ones, games, etc.) don't implement this. I assume it's a case of Apple's APIs changing over the years and not everyone catching up/caring. At one point I had downloaded Amphetamine[^1] but it is much nicer to just use the terminal here.

[^1]: https://apps.apple.com/us/app/amphetamine/id937984704
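(For context: the terminal route here is presumably macOS's built-in caffeinate command. "caffeinate -i" prevents idle sleep until you kill it, "-d" also keeps the display awake, and passing a command, e.g. "caffeinate -i make", keeps the Mac awake only while that command runs.)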




I thought the fact it is running from a Lisp script, uses Hunchentoot, and uses XSLTs for its feed pages already puts it in some niche category of "huh, neat".


The slides from the Spark AI 2020 summit [1] helped me understand this a bit. If I understand correctly, the premise is that a specific format is used to organise information into efficient blocks where related column data "lives" closer together, enabling faster read speeds at the cost of worse write speeds.

If someone has more resources on the topic, I'd be very interested. There are many applications where sacrificing data freshness for a considerable uptick in performance is alluring.

[1]: https://www.slideshare.net/slideshow/the-apache-spark-file-f...


> enabling faster read speeds but worse write speeds

Write speeds will probably decrease, because the on-disk organization is so heavily optimized for reads.

But the goal is to speed-up queries of the type of "select sum(price) - sum(tax) from orders" at the cost of queries of the type of "select * from orders where id = 1".
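A quick way to see the trade-off locally is a sketch with pyarrow (file and column names are made up):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # write a tiny columnar (Parquet) file
    orders = pa.table({
        "id": [1, 2, 3],
        "price": [10.0, 20.0, 30.0],
        "tax": [1.0, 2.0, 3.0],
    })
    pq.write_table(orders, "orders.parquet")

    # the aggregate query only needs to read two column chunks...
    cols = pq.read_table("orders.parquet", columns=["price", "tax"])
    total = sum(cols["price"].to_pylist()) - sum(cols["tax"].to_pylist())

    # ...while a "select *" point lookup has to read every column
    everything = pq.read_table("orders.parquet")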


This lecture series on columnar storage formats and querying them is great. https://youtu.be/1hdynBJo3ew?si=5KfT_2qpUFQmy_uL


Thank you!


I'm not sure how I feel about the content of this entire post. RSS feeds are a more or less stagnant technology, mostly adopted "for fun" or by a niche of people who find them useful. The only way I can see to move forward is to make them as easy and painless to use as possible; the onus falls on both the creator and the consumer of feeds (the websites and the clients... the user is completely out of the equation here, in my opinion).

This kind of attitude reeks of the "you're using it wrong!" of the Linux world. Are you really telling me you are getting enough RSS feed requests to put a dent in your tech stack? Is your bandwidth overhead suffering that much (are you not caching?)? Make it painless and let's be thankful it isn't all web scrapers masquerading as users.

Mind-boggling problem to be angry about.


Some of us are not serving our blogs through Cloudflare. In fact, I'm using a 5-year-old entry-level Synology NAS located in my apartment to serve mine. I do that because it's already always-on by policy and therefore doesn't cost me anything extra to serve my blog from there, besides the domain name.

Badly behaved RSS readers that download feeds uncached are wasting orders of magnitude more bandwidth and CPU (gotta encrypt it for HTTPS) on my end than well-behaved clients that get served 304s. Some of them don't even set "Accept-Encoding: gzip" and download the feed uncompressed. That's hundreds of kilobytes per request wasted for nothing.
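For comparison, this is all it takes for a reader to behave; a minimal sketch using Python's requests (the feed URL and cached values are placeholders):

    import requests

    # values remembered from the previous successful fetch
    cached_etag = '"abc123"'
    cached_modified = "Wed, 01 Jan 2025 00:00:00 GMT"

    resp = requests.get(
        "https://example.com/feed.xml",
        headers={
            "If-None-Match": cached_etag,          # lets the server answer 304
            "If-Modified-Since": cached_modified,  # ditto, for Last-Modified
            "Accept-Encoding": "gzip",             # requests sets this by default
            "User-Agent": "my-reader/1.0 (+https://example.com/about)",
        },
        timeout=30,
    )
    if resp.status_code == 304:
        pass  # nothing changed, nothing downloaded
    elif resp.ok:
        cached_etag = resp.headers.get("ETag", cached_etag)
        cached_modified = resp.headers.get("Last-Modified", cached_modified)
        feed_xml = resp.text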

My blog doesn't see enough traffic to make this an issue for me, but I can see why this could be a real problem for popular blogs with loads of traffic.


I think it's generally a poor idea to serve a blog, especially a popular one with loads of traffic, from an entry-level Synology NAS.


I think it's generally a poor idea to require spending multiple hundreds of dollars to have your website on something you control.


It's kept up-to-date automatically. It is appropriately firewalled to only expose ports 80 and 443, the administration panel and the rest of the services hosted on it are only reachable from my LAN. It only serves static content, no scripting language is enabled.

A determined attacker might be able to get in despite all of these precautions, but it's at least administered in a somewhat responsible manner. I highly doubt most servers exposed on the internet are.


Right, and if you stuck that behind (e.g.) Cloudflare via cloudflared, it'd be faster for your readers, more secure for you (no direct access), and have no impact on your network or NAS's resources.

Right tools for the job. It's not a failing to use a cache.


You don't even need Cloudflare! If your blog is only updated infrequently (less than daily), serving what is essentially static text should not be difficult.


I can't wait for the day clownfare suddenly puts a price on this stuff they've been giving everyone for free. I find it hilarious that people who get 200 hits a day on their blog think they need it.


That's right, but I'm not making an argument of necessity.

I'm saying it's better for both you and your users to keep network traffic at a proper CDN than a home ISP network. It'll be faster for everyone.


Except the cache is broken on most browsers thanks to HTTPS.

Still better than giving in to Cloudflare, though.


This is the second time today I've read a HN comment saying that HTTPS and caching are incompatible.

Can you explain what you mean by this? I don't understand what you're saying (it seems obviously incorrect?)


Visit any page which should be cached. Click any link. Unplug. Click back. You get a dead end instead of the cached page.

The cache only works over HTTPS for extra assets, which is useless if you have a simple static site anyway.


I just tested it. Can't replicate.

Went to https://example.com, checked that it's cached in the inspector, clicked on the More information link which leads to https://www.iana.org/domains/example, unplugged my connection (went offline), clicked back. It showed the cached https://example.com. I clicked forward. It showed the cached https://www.iana.org/domains/example page. Clicked back/forward like a maniac, the pages switched seamlessly.

Repeated the same process on a non-cacheable page, it showed a dud when disconnected as expected.

Tested on Firefox and Chromium.


What? No it isn’t.


I think it's a shame that the web has become so bloated with frameworks and images and video and ads that you _can't_ easily serve a highly trafficked website from a ~20Mbps connection. It shouldn't need Clownfare etc.


> RSS feeds [would be] a more or less stagnant technology, mostly adopted "for fun" or by a niche of people who find them useful

Pray tell, what would be a good alternative for this «niche of people» who collect the news? We use RSS because news is collected through RSS.

And clients have to be properly, wisely configured: some feeds publish monthly, some much more than hourly; some are one feed per domain, others are thousands of feeds per domain... This constitutes a set-up problem.


RSS is the best option if you are looking to avoid walled gardens. It's a great way to find content without search ads or social media ads. RSS readers put you in control, instead of algorithmic social network feeds that manipulate you into doom scrolling. I think RSS has been growing as the social networks enshittify.


This is delusional. Are you not able to identify when a thing that you do isn’t very widely done?


> An odd interpretation of "democracy" is lurking according to which "my ignorance is worth as much as your knowledge"

~~ Isaac Asimov

> An odd interpretation of "democracy" is lurking according to which we should look at masses to take example, instead of warning

~~ mdp2021


Delusional over what?

Why should "wide adoption" be relevant? We are already very well informed (much too well informed) that "people" are """odd""". What are your assumptions?

If you need to drive a screw, and few people used screwdrivers - who cares? You'd still use screwdrivers even when people tried to use cakes or monkeys or simply stopped driving screws, would you not?

You already expect "people" to use cakes or monkeys for something when they would normally be expected to drive screws - actually, you expect to be surprised with much worse ideas becoming realities. So?

Screwdrivers remain relevant, and using them properly remains equally relevant. And especially so, when you note that people are there with loose parts stuck together because the cake-smudged monkey was not precise!


I see very little in the article that's actually targeted at people that hold it wrong. My interpretation is that the rant is targeted at RSS service developers who should know better, and for whom you are inventing excuses to justify laziness or incompetence.


It's a bigger issue than RSS, really. Fetching RSS is a simple GET request. It requires the most basic understanding of HTTP, and people still can't do it right: they don't know how to deal with standard headers, how to deal with standard response codes, how to send proper requests etc.

Do you think regular REST API calls to any other service are any different?


Any pointers for good resources to grok best practices?


Not sure about best practices, but these two resources are a good reference point:

- Know Your HTTP Well: https://github.com/for-GET/know-your-http-well

- HTTP Decision Diagram: https://github.com/for-GET/http-decision-diagram


Cache-Control goes a long way in the right direction: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ca...
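As a sketch of the publishing side (assuming Flask and a static feed file; names are made up), the whole well-behaved dance is a few lines:

    import hashlib
    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.get("/feed.xml")
    def feed():
        body = open("feed.xml", "rb").read()
        etag = '"%s"' % hashlib.sha256(body).hexdigest()
        # a well-behaved reader echoes the ETag back on its next fetch
        if request.headers.get("If-None-Match") == etag:
            return Response(status=304)
        resp = Response(body, mimetype="application/rss+xml")
        resp.headers["ETag"] = etag
        resp.headers["Cache-Control"] = "public, max-age=3600"
        return resp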


Very valid point, the frustrations here do share commonalities with the overall HTTP ecosystem.


So you could be angry at web scrapers, but not at RSS readers making badly formed requests every 5 minutes?

Making it painless is letting readers use the options available, supporting the commonly used standards and maybe fixing small problems (I'd probably space trim that URL for example). It doesn't mean supporting systems that are sending poorly formed or badly controlled requests, regardless of the impact it has on your tech stack.


Conspiracy theory (that I genuinely believe): consumers don't know what they want, and in reality they would love RSS if it was allowed to blossom. But RSS readers make the web unmonetizable, so they've been actively destroyed from every angle, in favor of the enshittified feeds the world is now addicted to. There are so many ways to quietly bury a technology if the market incentives are strong enough.

You can't algorithmically manipulate and addict someone who just follows accounts chronologically without ads, so Silicon Valley market forces tend to discourage RSS. Google killed their RSS reader once they realized browser-based feed browsing generated way more AdSense profit. Other readers get VC investment and mysteriously their free version becomes unusable garbage and adoption plateaus. Not every controlling interest in a VC's portfolio is benevolent.

Facebook, Twitter, etc. all make it against their terms of service to "scrape" your own friends' privately shared posts - you can only see them via the walled garden of ads. Apple's App Store fees would drop if consumers understood the utopia an RSS-based web has the potential of being, instead of a dozen addictive apps. But it was too free. RSS derails trillion-dollar roadmaps. The incentives are clear, and Silicon Valley knows it.


It's like pissing in the street. No one gets hurt and the street isn't going to break, but there will be a smell and it's perceived as a rather rude behavior except in the case of animals and small children.


The downvotes remind me of a thing at a job. We had an API for programmatic access to our customers' data, and one customer had bought some expensive BI solution that they wanted to feed with data from the system we provided. For months they came to us and complained that the API was broken and asked us to fix it.

When I looked in the logs I could see that they hit the API with a lot of requests in short succession that generated 403 responses, so we said that they needed to look at how they authorise. After a while they returned and claimed our API was broken. Eventually I offered to look at their code and they were like "yeah sure, but your silly little PHP dev won't understand our perfect C# application".

So I looked at it, and it was a mess of auto-concurrent, nested for-loops. If they had gotten any data out, it would have been an explosion of tens of thousands of requests within a few seconds. They also didn't understand the Bearer scheme and just threw the string they got from the auth endpoint straight into the Authorization header without any prefix. Maybe we should have answered with 400 instead of 403, but that would have been a breaking change and we didn't want or have time to do a new API version because of this.
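For reference, the fix is tiny; a sketch with Python's requests (token and URL are obviously made up):

    import requests

    token = "eyJhbGciOi..."  # whatever the auth endpoint returned

    # wrong: a bare token in the header, which a strict server rejects
    # requests.get(url, headers={"Authorization": token})

    # right: the RFC 6750 Bearer scheme requires the prefix
    resp = requests.get(
        "https://api.example.com/v1/rows",
        headers={"Authorization": f"Bearer {token}"},
    )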

Anyway, their tech manager got really mad that I had found, within an hour, the issue they had struggled with for months. I had also mentioned that the API adapter was designed for DoS rather than as a polite API consumer, and that maybe they should rewrite it to be less greedy, using ranges instead of crapping out a new request per row in a response and stitching everything together again on their end.

A few weeks later they got it running and it was brutal, but our machines could take the load so we didn't think more about it. Later I heard they got performance issues on their end and had to do as I had suggested anyway.

Be polite and pay attention to detail when you integrate with protocols and APIs. At best you'll be a nuisance if you don't, but many will just block you permanently.


Thank you! This is a more interesting comment than the pissing example.


I'm happy you liked it. Didn't think of it as hypothetical, since it happens a lot.

I'm a simple person, I often prefer the succinct, crude analogy over telling a story until asked or provoked into telling one.


I liked your story AND your analogy.


> Are you really telling me you are getting enough RSS feed requests to put a dent on your tech stack?

Both RSS feed providers and reader maintainers believe that RSS isn't dead, it's just pining for the fjords.

They've got to keep their readers efficient ready for RSS to rise again - much like Christians have to resist the temptation of sin ready for the second coming of Christ.

/s


The paper[1] seems to imply agitation is exactly what this method is promoting: "Furthermore, acoustic streaming induced greater mixing and enhanced mass transfer during brewing.". I assume the 100W of ultrasonic energy would be pretty hard to reproduce by just shaking your cold brew container though!

[1]: https://www.sciencedirect.com/science/article/pii/S135041772...


Very interesting, and I missed the 100W bit, thanks! Yeah, that would be really hard to do by hand for 2 minutes. Maybe Guinness World Records needs to see who can shake their cold brew hardest/longest. This raises further questions for me, like: can I shake with 10W for 30 minutes, or 10 minutes, or…? Does the frequency matter? Can we use one of those chem lab agitator machines to cold brew?


I find it interesting that 100W for ~120 seconds is ~12kJ (~2.9kcal), which would be enough to warm a 100ml cup by nearly 30C if it all ended up as heat. They are right at the limit of power to flow rate: much faster flow and presumably the cavitation wouldn't "brew" enough, while much slower and it would warm up the coffee noticeably. I'm doubtful the frequency matters much if the cavitation is what is causing the mixing, since those are just bubbles emerging and popping, but the efficiency of coupling from the ultrasonic wand to the liquid could change a lot.
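(Back-of-envelope in Python, assuming all of the power ends up as heat in plain water:)

    power_w, time_s = 100, 120
    energy_j = power_w * time_s      # 12,000 J
    energy_kcal = energy_j / 4184    # ~2.9 kcal
    # 100 ml of water takes ~0.1 kcal per degree C
    delta_t_c = energy_kcal / 0.1    # ~29 C in the no-loss worst case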

Since you could presumably put 2 of these in parallel and have 2x100ml cups in 2min with 200W without changing the recipe (or 1 cup in half the time), this seems pretty scalable with increased cost and area.

Unagitated cold brew is in the 10-hour region, but with agitation/pump-through it seems like you can do 8 cups in 20 minutes, which is almost as fast as the cavitation method. I suspect the grind size starts having really big effects here.

https://instantpot.com/products/instant-cold-brewer


Yes, it's the frequency and also the amplitude that make it faster. One could use a lab agitator, but it would still be too slow. I think if you pour the water into an ultrasonic cleaner along with the coffee and filter the mix, you might get the same result.


That's precisely the thought which has taken hold of me now as well.

The ultrasonic cleaner is always stashed away somewhere safe for its once-a-year use ... I think I can't resist experimenting.


I think the scale of the vibrations is important here. The paper mentions acoustic cavitation, which I believe would only really occur at high frequencies like the ones stated in the paper, not the large, slow shakes that you or I could produce.


No humans walking around makes me sad :( A really cool project, albeit worrying that 50% of the real estate of Infinitown is roads only a couple of cars drive on ;) When zoomed in all the way, I wish there was some LOD meshing to provide a longer view distance; it would make it feel much more immersive.


A handmade, ever-growing version with plenty of people in it is this one, which I recently stumbled upon:

https://floor796.com/


Sometimes I think I have plenty of patience, reasonable talent, and a strong drive for excellence.

But then I see people doing things like this.


This is very cool, but not infinite, which is understandable given that it's built from specific, handmade content.


Neither is Infinitown, but I get what you mean.


I find it kind of interesting and cozy to “peoplewatch” and this really scratches that itch.


Pretty much like the real world in modern cities then…


Grid layout, only cars around. Yep, the USA


*modern North American cities

ftfy


If only… I live in France and lived for a bit in the UK, and suburbs are like that in Europe too.


Features like this desperately need to be an AI-curated but human-driven operation, i.e. an AI can aggregate some headlines from the latest news, but a human needs to be the one who says "good to go" or "pass" on each. It's not shiny, but it would work (presumably better).

This just proves (to me) for the umpteenth time that AI can help us with our work immensely, but company after company refuses to hire people who work well with it, instead focusing on trying to employ it in a myriad of unsuitable functions.


It makes no sense to summarize the news from the content of tweets that are about the news, no matter how it is done. You cannot produce a correct output from the wrong input: garbage in, garbage out.

AI summarizing the correct information could work with minimal human supervision.


Fully agree. The idea here doesn't seem to be to create quality content, but to satisfy people's itch to be informed about the world without actually paying a reputable source.

Next: Create real-time reports of sports events by just combining "hearsay" from random people on Twitter into ever-evolving "factual" news. Maybe with a user-option to select which team you're rooting for, so the hodgepodge of hearsay is automatically curated to create a favorable outcome for you personally...


News aggregators already exist. What value add from AI are you thinking of? As a user, I don't care about getting news 30 minutes faster. I also don't care if news aggregators miss out on some stories that AI would catch. I already don't use news aggregators because it's too much news.

The problem that needs to be solved is noise reduction. I want more thoughtful and original content. I want to slow down my media consumption. If someone could build a media aggregation/filtering product that serves me instead of advertisers, I'd use it, but AI wouldn't be the point of that product, just a means to an end.


> News aggregators already exist. What value add from AI are you thinking of?

Not the parent commenter, but: a lower payroll figure.

