Hacker News | zahlman's comments

I wonder: what's the least that could be removed from CSS to avoid Turing-completeness?

Does this concept of "personal blog" include people periodically sharing, say, random knowledge on technical topics? Or is it specifically people writing about their day-to-day lives?

How would I check if my site is included?


You can check: <https://github.com/kagisearch/smallweb/blob/main/smallweb.tx...>. I can see that your RSS URL is listed there.

But it currently does not appear in the search results here: <https://kagi.com/smallweb/?search=zahlman>. The reason appears to be this:

"If the blog is included in small web feed list (which means it has content in English, it is informational/educational by nature and it is not trying to sell anything) we check for these two things to show it on the site: • Blog has recent posts (<7 days old) [...]"

(Source: https://github.com/kagisearch/smallweb#criteria-for-posts-to...)


Why would you only include blogs in your small web index? That must be a minute fraction of what is out there?

I can't think of a single blog that I read these days (small or not), yet there are loads of small "old school" sites out there that are still going strong.


> Why would you only include blogs in your small web index?

I am not associated with this project, so this would be a question for the project maintainer. As far as I understand, the project relies on RSS/Atom feeds to fetch new posts and display them in the search results. I believe this is an easier problem to solve than running a full-blown web crawler.

However, as far as I know, Kagi does have its own full-blown crawler, so I am not entirely sure why they could not use it to present the Small Web search results. Perhaps they rely on date metadata in RSS feeds to determine whether a post was published within the last seven days? But having worked on an open-source web crawler myself, many years ago, I know that a crawler can determine this too if it crawls frequently enough.
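For illustration (this is a hypothetical sketch, not the project's actual code), checking whether an RSS feed has a post less than seven days old needs nothing beyond the Python standard library:

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime
import xml.etree.ElementTree as ET

def has_recent_post(rss_xml: str, max_age_days: int = 7) -> bool:
    """Return True if any <item> in the feed has a pubDate
    within the last max_age_days."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    root = ET.fromstring(rss_xml)
    for pub in root.iter("pubDate"):
        try:
            # pubDate uses RFC 822 dates, e.g. "Tue, 10 Jun 2025 09:00:00 GMT"
            if parsedate_to_datetime(pub.text.strip()) >= cutoff:
                return True
        except (TypeError, ValueError):
            continue  # skip malformed or timezone-naive dates
    return False
```

A crawler without feeds would have to infer the same thing from Last-Modified headers or content diffs between visits, which is presumably what frequent recrawling buys you.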

So yes, I think you have got a good point and only the project maintainer can provide a definitive answer.


Being able to configure your system to type the characters really doesn't solve the problem. In particular, if you get data (including metadata such as filenames) from someone else, you need to recognize the characters, both to do the configuration and then actually type them. And characters are not glyphs. There are all kinds of cases where simply looking at something doesn't and can't tell you what characters are in it.
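A quick illustration using Python's built-in unicodedata module: two strings can render with the same glyph while containing entirely different code points, so only a programmatic lookup tells you what characters you are actually dealing with.

```python
import unicodedata

latin = "a"          # U+0061 LATIN SMALL LETTER A
cyrillic = "\u0430"  # U+0430 CYRILLIC SMALL LETTER A, usually drawn identically

print(latin == cyrillic)  # False: same glyph, different characters
for ch in (latin, cyrillic):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```

The same trick works on filenames: mapping each character through unicodedata.name() exposes lookalikes that no amount of staring at the terminal will reveal.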

Plan 9 also came with a utility command called “unicode” which helps analyse Unicode strings (get the code point, etc.).

Elegant weapons of a more civilised age…


There is a portable unicode¹ tool available, and it is packaged in a bunch of distributions. I'll spare us the full output, but "unicode a→z" produces something like:

    U+0061 LATIN SMALL LETTER A
    UTF-8: 61 UTF-16BE: 0061 Decimal: &#97; Octal: \0141
    a (A)
    Uppercase: 0041
    Category: Ll (Letter, Lowercase); East Asian width: Na (narrow)
    Unicode block: 0000..007F; Basic Latin
    Bidi: L (Left-to-Right)
    Age: Assigned as of Unicode 1.1.0 (June, 1993)
    …
    U+2192 RIGHTWARDS ARROW
    …
    U+007A LATIN SMALL LETTER Z
¹ https://github.com/garabik/unicode
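Much of the same information is also exposed by Python's built-in unicodedata module; here is a rough sketch (describing each character literally, without the tool's a→z range expansion):

```python
import unicodedata

def describe(ch: str) -> str:
    """One-line summary in the spirit of the unicode tool's output."""
    name = unicodedata.name(ch, "<unnamed>")
    utf8 = " ".join(f"{b:02x}" for b in ch.encode("utf-8"))
    cat = unicodedata.category(ch)
    return f"U+{ord(ch):04X} {name}  UTF-8: {utf8}  Category: {cat}"

for ch in "a\u2192z":
    print(describe(ch))
```

The standalone tool still knows more (blocks, bidi class, Unicode age), but for "what on earth is this character" the stdlib version is often enough.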

Emacs has a built-in M-x describe-char that prints out similar information and also this useful tip:

    to input: type "C-x 8 RET 2192" or "C-x 8 RET RIGHTWARDS ARROW"
However, when trying to type the name of a file in a shell that has some weird Unicode character in it, just copying the character is faster than first identifying it and then using some clever trick to type it. It can be useful to know how to insert some small number of weird symbols, or to use C-x 8 RET (followed by TAB completion) to find symbols, but I almost always stick to what is available on my keyboard, and often only a small ASCII subset of that, to keep things simple.

It takes all kinds, I suppose.

If you have enough detail for a blog post I'd heartily encourage you to submit it.

I actually had one a while back but it became too taxing to keep it up to date. I've covered much of this stuff on HN though.

"Developers" here clearly refers to the entire organization responsible. The internal politics of the foo.com providers are not relevant to Foo users.

I agree, except for your definition of "developers". I see this all the time and can't understand why the blame can't just fall on the business as a whole instead of singling out "developers". In fact, the only time I ever hear "developers" used that way, it's a gamer without a job saying it.

The blame clearly lies with the contradictory requirements provided by the broader business too divorced from implementation details to know they're asking for something dumb. Developers do not decide those.


This site more or less practices what it preaches. `newsbanner.webp` is 87.1KB (downloaded and saved; the Network tab in Firefox may report a few times that, and I don't know why); the total image size is less than a meg, and then there's just 65.6KB of HTML and 15.5KB of CSS.

And it works without JavaScript... but there does appear to be some tracking stuff: a deferred call out to Cloudflare (a hit counter, I think?) and some inline stuff at the bottom that defers some local CDN thing the old-fashioned way. NoScript catches all of this, and I didn't feel like allowing it in order to weigh it.


Writing the code can definitely feel like the bottleneck when it's a single-person project and you're doing most of the other hard parts in your head while staring at the code.

> anything associated with 'vibe' feels inherently unsecure.

Only "feels"?


That wasn't being claimed, just proposed as the direction we're headed.

Another user had already written what I had in mind when I responded to your comment.

https://news.ycombinator.com/item?id=47387570

