I miss the programmable web (2021) (matt-rickard.com)
248 points by rckrd on July 30, 2022 | 177 comments



    I miss: bookmarklets and user scripts
I use bookmarklets all the time. I can't imagine living without them. For example, this is how I read HN:

https://twitter.com/marekgibney/status/1551483561621979136

The bookmarklet takes me to the latest HN post I have not seen. When I click it again, it takes me to the next one I have not seen. As soon as I see a post I have seen before, I know I am up to date.

I can't imagine reading HN any other way. Because otherwise, I would have to skim the whole front page to see what's new.

It also works on threads. So, when I post something on HN and want to see the latest replies, the same bookmarklet does this for me.

I have bookmarklets for every site I use frequently.


I use them to automate annoying workflows for FE development. Sometimes you need to click through a series of things to get the UI into the state you need. A bookmarklet is a perfect solution.


Check out Facebook’s Storybook if you haven’t already. It’s life changing when you don’t have to click through an app to get the state you need.


Storybook is awesome. Just wanted to clarify it has nothing to do with Facebook or Meta.


It is also trivial to turn Storybook into visual regression tests.


Do you have a link? I have a really hard time googling this.



  javascript:if (typeof e!=='undefined') e.style.background='#ccc';
  e=Array.from(document.querySelectorAll('.age%20a:not(.s)')).sort().at(-1);
  e.style.background='#ff0';e.classList.add('s');
  e.scrollIntoView({behavior:"smooth",block:"center",});{}
Looks like the code got mangled in the Twitter post, in particular the curly braces at the end. The line order too: the "e=" assignment should logically come first.

  javascript:e=Array.from(document.querySelectorAll('.age%20a:not(.s)')).sort().at(-1); if (typeof e!=='undefined') { e.style.background='#ccc'; e.style.background='#ff0'; e.classList.add('s'); e.scrollIntoView({behavior:"smooth",block:"center"}); }


It is all one line. Yes, Twitter formats it in a way that makes it impossible to copy just the text, because it thinks that .style is a top-level domain and #ccc is a hashtag.

Here is a repo from which it is easy to copy:

https://github.com/no-gravity/hn-bookmarklet


Ah that's nice, thank you. Sorry, I mis-rewrote when I tried to correct what was copied from Twitter. It added some extraneous "https://" too.

  javascript:if (!location.href.match('ycom')) {location.href='https://news.ycombinator.com'};if (typeof e!=='undefined') e.style.background='#c0c0c0';e=Array.from(document.querySelectorAll('.age%20a:not(.seen)')).sort().at(-1);e.style.background='#ff0';e.classList.add('seen');e.scrollIntoView({behavior:"smooth",block:"center",inline:"nearest"});{}
I see, that {} at the end is intentional, as you described in another comment.

And accessing the variable e before assigning it a value, now I see that probably serves a practical purpose too.


Did you write this? If so, why did you add the "{}" at the end? Also, can't you use newlines in bookmarklets?


Yes, I wrote it.

The {} prevents the code from evaluating to a value that the browser then would use as the new value for the DOM, overwriting the existing page.

Try bookmarking javascript:123 and see what happens when you click it.

Bookmarklets are URLs, so a newline in the code would be urlencoded as %0A. So you could do:

javascript:alert(1)%0Aalert(2)

But I would prefer:

javascript:alert(1);alert(2)


This has me really confused. The example you shared in that tweet is very malformed code that I cannot get to run anywhere.



That looks great. Is there something similar for comments?


The HN structure is the same for top level posts and comments. So the bookmarklet works for both.


Last I checked, archive.is had killed its bookmarklet, which is very annoying.

Haven't checked in a while though; maybe there are workarounds.


I thought this would be about the glory days of “Mash-ups” using web APIs. Flickr + Twitter + Delicious tags to create a new experience. 2007? What a period to be a hacker!


If we could *force* all the walled gardens to have an open API again, like some of them had back in the day, that would solve soooo many issues.

Make all the data free to use for everyone, while ML techniques get even more accessible, and let people build products with that - where they also have to provide APIs for their enriched data.


Even at the time, HTML scraping was necessary for a lot of things. And lots of services that had open APIs required access keys. I'm allowed to read your Twitter feed in HTML anonymously, but for JSON I need to apply for an account in advance and log in, for example. Even if we could force open APIs, it wouldn't be sufficient.


We can, via policy and regulations! The EU would be your best bet for lobbying, as Open Data is already mandatory for public data, and an extension to some privately produced data is not out of the question (it was discussed for transport data, e.g. Uber data).


I suspect part of why they closed open APIs down is because it was abused (for spam or similar). Not sure how to convince them to open it back up in light of that.


Require registration of an account tied to a real person or organization for access to data.


Oh those were the days. Yahoo pipes and all of the various mashups. Then the walls started closing in :(


And that is not dead either, in fact it is better because you can just self-host https://nodered.org/


My most used bookmark is this, for speeding up the first audio or video element on the page. I suppose that's if you don't count me typing 'n' and auto-completing my way to hacker news.

    javascript:(function() { (document.querySelector('video') || document.querySelector('audio')).playbackRate = (+prompt('How many times normal speed?') || 1); })();


For webkit based browsers, you can also use Video Speed Controller extension

https://github.com/igrigorik/videospeed


On YouTube I've had a great experience using Sponsorblock which crowd sources categorizing different parts of the video. If a bunch of people categorized the first 10 seconds of a particular video as "Intro", that section of the video scrub/seek bar is highlighted in a color that represents that category. I have Sponsorblock set to skip Intros, so I never see them unless it's a very very fresh video that nobody has categorized yet.

It's baked into YouTube Vanced on mobile, which was sadly taken down by Google. Smart Tube Next is another Android app with this feature, but I'm not sure if it's available for anything other than TV set top boxes.


Here's the fork that added Firefox support (https://github.com/codebicycle/videospeed, https://addons.mozilla.org/en-US/firefox/addon/videospeed/), and it's saved me so much time it's absolutely ridiculous at this point.


But a bookmarklet saves you from having to run an extension at all times.


It's become harder to develop stupid-simple bookmarklets: modern web pages are trickier to reverse-engineer, the market has expanded by a lot, and people can now make working on handy WebExtensions their full-time job.

It used to be that browser extensions were only used by a select few power users, if you don't count those installed by malware/shitware. Now it's common to see folks who only use their computers for Instagram and YouTube with all sorts of browser extensions installed intentionally and used regularly. And those extensions are essentially just bookmarklet/userscript bundles with access to some special extra APIs.

What's more, with the proliferation of Electron in desktop applications, now even more of the UIs we use are programmable.


I miss the 3D web. In the late 90's Silicon Graphics machines came with an out of the box experience delivered by Netscape Navigator. The page was rendered in frames, so you only downloaded the navigation frames once, not with every click.

Some of this navigation and some of this content was in VRML that you could interact with. The full product tour could be explored in a 3D world. The UI/UX did not seem OTT. It felt like the future.

It turned out that the excellent CosmoWorlds VRML authorship tools were just a little bit ahead of their time and there was little of the expected interest. We ended up with a flat 2D web rather than a 3D web. Sure this makes sense but the tools you have available frame how you solve problems.

Also out of the box was a web server. You put HTML (and VRML) in your shared web folder and your machine instantly had your stuff available on the local network, and you could route to the machine from the internet if you had the networking skills to set that up.

It was all in the original Tim Berners-Lee vision; you could imagine how this would make sense in academia, with academics in a department able to put all their work/knowledge on the web very easily.

In this SGI version of the 3D web it was the JavaScript that glued it all together: VRML was made useful with scripting, which could reach across those frames to put together an awesome web experience, rose-tinted spectacles notwithstanding.

We aren't even thinking along those lines these days. The web has gone, we have platforms now.


There's been good WebGL & WebGPU work. What's been sad is that none of it is hypermedia-like. I kept hoping I could someday have some DOM surfaces I could put inside a 3D game.

Apple's been driving a "model" element for the web. Personally I'm unthrilled that it's an object bereft of an environment; there's no real space or navigability.


    We aren't even thinking along those lines these days. 
Some of us are. My side project is in many ways a 3D web, and I know I'm not alone in exploring the domain.


I would love to see what those UIs looked like, but I'm completely unfamiliar with it. What should I search on Google to get proper results?


I'm still happily using userChrome.css and userContent.css in Firefox to customize both its UI as well as a couple of sites I visit regularly. To do so, you need to enable toolkit.legacyUserProfileCustomizations.stylesheets in about:config.
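
For example, here's a minimal userContent.css along these lines (the domain and selector are just placeholders; the file lives in the profile's chrome/ folder):

    /* <profile>/chrome/userContent.css */
    /* Hide a site's sticky header, but only on one domain */
    @-moz-document domain("example.com") {
      .sticky-header { display: none !important; }
    }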


I use an extension like Stylus to sync these across browsers and devices


Huh? Userscripts are dead according to the author, but I apparently never got the notice, because I have about a dozen of those things installed right now. Three of them are for YouTube, because it's just that annoying as-is.


They aren’t dead but with more and more build systems that obfuscate everything in the page, including CSS classes and IDs, it’s sometimes hard to do something really useful.


Would you mind giving me an example of something you've wanted to do that is really hard (or even nearly impossible) due to these issues (so I have a standing concrete challenge case)?


As an example, I wanted to create a custom, more compact style sheet for JIRA (I don’t understand all those products with complex interfaces AND white space everywhere). I failed. It’s a PITA to write any selector and it wouldn’t survive the next build from the devs.


You don't have to select things based on CSS selectors. If you want something really resilient, select based on the content of the elements (a button that says "Send" will rarely change to say something else, while the styling/DOM can change a lot). Otherwise XPath is another option, if the DOM doesn't change a lot while the CSS class/ID attributes change between builds.
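
For example, something like this survives class-name churn (the button text is an assumption):

    // Select a button by its visible text instead of a generated class name.
    const sendButton = document.evaluate(
      '//button[normalize-space(.)="Send"]',
      document,
      null,
      XPathResult.FIRST_ORDERED_NODE_TYPE,
      null
    ).singleNodeValue;
    if (sendButton) sendButton.click();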


I have another fun challenge, try to completely disable the auto-zoom-feature on Google Maps. Last time I tried, I stepped through 20 stacks of obfuscated nonsense before eventually giving up.

But tbh I can't really imagine any scenario that is near impossible - except for WASM driven web pages which will break just about anything :(


why would WASM driven pages be any different to deal with than non-WASM driven pages?


You can't scrape HTML out of a rendered bitmap. It's not the only way WASM can be used, but often all the rendering happens in the compiled code and the page only receives a series of images to display on a canvas. It's Flash all over again.


> It's Flash all over again.

Flash was at least easy to decompile. The decompiled code was so good most of the time that you could compile it back into a working swf, with modifications if you want any.

But then both WASM and Flash have to get the data from somewhere. That is usually something resembling an API endpoint that returns the result in XML or JSON. Why not sidestep the whole client app thing and go straight for that API endpoint?


Almost nobody is doing this, and almost nobody ever will because it breaks accessibility and a whole host of other features. I would save the alarmism for another day.


You're right about the accessibility and other features breaking. You're wrong in saying that (almost) nobody would do this. What (almost) nobody will do is create an HTML-based rendering frontend for windowing (GTK, Qt, wx, etc.) and graphical (SDL, DirectX, OpenGL, etc.) toolkits. It's technically possible, but maintainers have enough work supporting just the 3 desktop and 2 mobile platforms. In the absence of a proper HTML rendering frontend, there's really nothing you can do other than compile the whole graphical stack and render into a canvas. That's the default for graphical apps under Emscripten: all games and GUI apps currently work that way. The only apps that have an HTML rendering frontend are CLI tools and REPLs, where it's trivial enough to rewrite the rendering in JS. There are frameworks currently being developed that allow rendering of widget trees in HTML or SVG, but they're still immature and not widely used. Well, WASM itself is not widely used, so that's normal.

BTW: were you around in the days of Flash? Because it broke accessibility and a whole host of other features in the same way, yet it was still used and hugely popular (there was also ActiveX, which made things even worse). It began with online games and graphics-rich pages, but it evolved frameworks for generic apps (e.g. Flex) and then even CMS engines. Soon enough Flash was used even for blogs and e-commerce. The situation today is different because the Web has evolved into a more capable and standardized platform, but there's no shortage of devs who would welcome the ability to run their app in the browser but dislike HTML/JS so much they would do basically anything just to not have to touch it, accessibility and a lot of other things be damned.

Don't underestimate how irrational people and even whole communities can be.

As a bonus, here's an example of the "almost" part: http://35.158.218.205/experiments/webDOOM/


Anatine[0] was a Twitter client maintained by Sindre Sorhus, a popular open-source maintainer. It made use of a userscript to add extra functionality to Twitter. Sadly, this meant keeping up with Twitter's ever-changing DOM class names.

It was eventually abandoned.

[0]: https://github.com/sindresorhus/anatine


Facebook and Instagram ads. Impossible to block through CSS now, as the CSS classes are dynamically generated, and it's even hard to pick out sponsored posts using JS too.


I am pretty sure that there is a team of engineers whose job it is to obfuscate ads on Instagram and Facebook.


Surely these people can find something more beneficial to society to work on, such as selling heroin to middle schoolers.


Money is money. Few of the jobs I’ve worked have had a positive impact on society.


From JS, simply find all <a> tags that have a URL containing "utm_medium=paid". For each such element, go up through parentElements until the parent is <div role="feed">, then remove the child.
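
Roughly this, as a sketch (Facebook's actual markup may differ):

    // Find links tagged as paid, then climb to the feed item and remove it.
    document.querySelectorAll('a[href*="utm_medium=paid"]').forEach(a => {
      let el = a;
      // Walk up until the parent is the feed container, so `el` is the feed item.
      while (el.parentElement && el.parentElement.getAttribute('role') !== 'feed') {
        el = el.parentElement;
      }
      if (el.parentElement) el.parentElement.removeChild(el);
    });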


Just block the entire feed and go directly to the profile pages of users you want updates from.


ca. 2014 I was able to write a bookmarklet that scraped a twitter thread by accessing the DOM (even causing page navigation) and rendering it as a reply-tree.

Two things make this vastly more difficult now:

1. Twitter uses obfuscated css classes that appear to change on every deploy, which makes scraping hard.

2. You used to be able to get around the limit on bookmarklet code size by adding a script tag to the page and loading remote JavaScript. Now browsers respect HTTP headers that sites can use so that only scripts from whitelisted origins are loaded.
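
For reference, the old trick in (2) looked something like this (the URL is hypothetical); with CSP, the browser now refuses to fetch the remote script:

    javascript:(function(){var s=document.createElement('script');s.src='https://example.com/bookmarklet-payload.js';document.body.appendChild(s);})();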


Do older tweets get these new css classes when viewed after a deploy?

I could see using a set of standard tweets you created to sort of benchmark the change.


I imagine they would. That would be a good approach for data scraping, but difficult to do in a bookmarklet.


Try it on a Tailwind & React based site. Luckily for a site like that, it used NextJS, so it has a lot of JSON state dumped in plaintext.


I am asking for a concrete task challenge, not a kind of site or technology.


Ok scrape data from manifold.markets


Does it change a lot? Because it's pretty easy as-is... e.g.

% curl -s https://manifold.markets | htmlq '#__next > div > main > div.items-center.flex.flex-col > div > div.gap-8.flex.flex-col > ul > div > div > div > div.relative.flex-1.gap-3.pr-1.flex.flex-col > div.peer.absolute.-left-6.-top-4.-bottom-4.right-0.z-10 > a'

will get you all the anchors for the stories on the home page, and from there it's easy to get any other part of the stories.


It changes quite often. Since Tailwind is utility-based and not semantic, it will likely break within the week.


Probably Tailwindcss.com, then?


No, like "the website X is heavily obfuscated but I wish I had a userscript that could make Y modification". I work on runtime code modification tech, and am mostly known for my work on iOS and jailbreaking. Someone saying "there exist apps that use this weird compiler that are difficult to patch" isn't useful: I need a specific app and a specific target like "the Orchid app is using Flutter, which has its own weird Dart to native compilation step, which is making it almost impossible for me to use the tools we already have available to add a button that automatically builds multi-hop accounts".


Trying to block Google’s “people also search for” div (when you navigate back to the search results page) is not easy IIRC


You can try these uBlock filters. The difference is night and day:

    www.google.com##div[role="heading"]:has-text(/^People also ask$/):upward(5)
    www.google.com##span:has-text(/^People also ask$/):upward(5)

    www.google.com##div[role="heading"]:has-text(/^Cast$/):upward(5)
    www.google.com##span:has-text(/^Cast$/):upward(5)

    www.google.com##h3:has-text(/^Videos/):upward(5)
    www.google.com##h3:has-text(/^Images/):upward(5)

    www.google.com##div[role="heading"]:has-text(/^People/):upward(5)
    www.google.com##span:has-text(/^People/):upward(5)
    www.google.com##h4:has-text(/^People/):upward(5)

    www.google.com##span:has-text(Related searches):upward(5)

    www.google.com##span:has-text(/^Latest/):upward(4)

    www.google.com##h3:has-text(/^Top stories/):upward(6)

    www.google.com##div:has-text(/^Trailers/):upward(4)

    www.google.com##g-scrolling-carousel


This thing drives me nuts! It’s always popping up JUST at the right time to catch my click when I’m aiming for the second search result. It’s been that way for long enough now that I’ve started to assume it’s like that on purpose, just to keep you from leaving.


Override the font on all YouTube text elements.

You'd probably think it'd be simple, right? This being the kind of thing that Cascading Style Sheets were designed to handle elegantly and gracefully?

https://maya.land/user-styles/youtube/

I always thought people on here were just cranks complaining about style bloat, because a lot of the "this could have been a text file" arguments were applied to things that were genuinely better with the presentation polish. But this? How can developer ergonomics be worth this?

Anyway, user styles and user scripts and bookmarklets are all great and I use them all the time. This level of Big Tech deformation of the basic web technologies makes them more annoying, but still not impossible.


There were these things called API spies back in the VB6 days that would figure out the appropriate way to refer to whatever you clicked. The one from PatorJK would even produce code to do things to it from downloadable templates.

It's still around: https://patorjk.com/blog/software/

I wonder why no one ever made something like this for the web.


It is built into every browser's dev tools, but the point GP was making is that the way many websites are built now means that for an external observer, the way you refer to an element changes with every reload, or with every new build (which may happen at any time and frequently for actively developed sites).


What do your user scripts do, if you don't mind me asking?

On a side note: I've been looking for one that shows recommendations always based on similar videos, not on my recently watched and whatever the black-box algorithm is offering me. Unfortunately I never figured out how to do it properly, since the position of the element seems to be random and sometimes it's straight up missing.


The YouTube ones?

1. Prevents channel page video from auto-playing (not mine)

2. Removes the stupid custom scrollbar that takes up actual pixels on my screen at all times

3. Automatically sets highest available video quality by clicking through the quality menu so I don't have to

And those are in addition to SponsorBlock and uBlock Origin extensions.


I made one for youtube that moves the seek bar below the video, instead of on top of the video. In addition it removes the gradient etc. I've always hated how it covered up the content. (pure css, but still)


The one that switches off the comments section makes the platform quite a bit more pleasant.


I use one to redirect youtube shorts pages to the regular youtube UI. Also one for old.reddit.


Many sites are turning on CSP, a web security capability which lets sites block loading content from anywhere they don't allowlist.

Userscripts will run, but if they try to load any content from elsewhere, that breaks. So, for example, the Hypothes.is extension won't load. The set of affected sites keeps growing. https://twitter.com turned on CSP many years ago.
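
For reference, a typical policy header looks something like this (domains are made up); anything a userscript tries to load or contact outside the allowlist gets blocked:

    Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; connect-src 'self'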


This is such a silly problem to have. User agents that want an exception to a policy are free to make one.


The mainstream browsers & spec authors sneer at & detest userscripts, and wish they'd never been a thing. https://github.com/w3c/webappsec-csp/issues/444

I agree. This is a simple & stupid problem. It's grossly anti-user. But today, security rules, and security sees all the misdeeds of users, and those who would prey on them, and has no empathy left for creativity and fun. There's no moral-value party left anywhere that defends user freedom, that is pro end-user-hacking.

The Manifest V3 (MV3) catastrophe is still scheduled for 4 months out, even though only a third of the work it would take to allow Greasemonkey/Tampermonkey/et cetera to live has been done. The new browsers are actively unwinding power & possibility, forever trying to undo the permissiveness the web had (ex: CORB, and now ORB). The standards authors & browser makers are infected with deep, deep fear. Certainty & closedness are winning.


Keep in mind that all of the mainstream browsers are funded by companies that make money off of advertising, so you should always look for ulterior motives from them.


+1, I use somebody's Greasemonkey script to keyboard-navigate the Gmail simple HTML view. Very handy.


More and more sites use "Content Security Policy" and prevent javascript: from working.


On a related note, Chrome removed its user stylesheet feature years ago. I've often asked myself why we then still need this "cascading" of style sheets. It seems to have degenerated into a feature for web developers who can't make up their minds, pushing conflicting styles to every browser for late evaluation and negatively impacting energy consumption. Or maybe they want component-like behavior from their third-party stuff that they can adapt and override, but at the same time the comfort of not having to do so when unnecessary? And they don't want preprocessing either; nope, it all has to be in the browser, with accessibility a fig leaf for this complexity. In other words, CSS was hijacked by web devs for their convenience, as can also be seen in the custom properties feature, the upcoming nesting feature, and so on.

Another way to say this is that we have web developers now when the original web was for web authors.


Isn't this just how technology progresses? Things start off as amateur pursuits, but once a field becomes profit-making, professionals move in and industrialise things.


I think it was meant to be a reference to this bit in the original CSS spec:

"One of the fundamental features of CSS is that style sheets cascade; authors can attach a preferred style sheet, while the reader may have a personal style sheet to adjust for human or technological handicaps."

So CSS was originally intended for both users and web devs, but web devs "hijacked" it to be for them alone - ultimately, by removing support for user stylesheets in the dominant web browser.


Industrialization under no-competition doesn't lead to professionalization (if you associate more than just "being paid" with it). When there are no clear metrics of what's good or bad, you can live with that as an amateur and trust your common sense and taste. However, when you become professional, in the face of no clear metrics you have to invent some (or your manager will invent some just for you). These invented metrics are then gamed and gamed again, as it doesn't matter if you actually meet them or not, since they're meaningless to begin with.

By "no-competition" I mean for the professionals/employees. There's so much demand for programmers and so much money allocated for them that the limited supply makes it trivial and common to be a professional today with skills on the level of a script-kiddie from the '90s.


Maybe, but somehow Microsoft Office with its VBA macros has managed to remain sane in comparison ;)


This is a good response to the vast majority of curmudgeonly HN posts tbf


C'mon, let's not imply that web developers are, generally speaking, professional. ;-)


Idk "it is what it is" doesn't make an interesting conversation OTOH, does it?


I got my first dev job at a call center because of bookmarklets. I was a phone operator and hacked some internal websites to improve the UX and the speed; everyone ended up using these bookmarklets (coded in Word and then minified by running another script in the browser console).

Greasemonkey is still alive. I added voice recording and speech-to-text transcription to Slack before they did.


My two most used bookmarklets both find the last number in the URI path; one increments that number and the other decrements it. Perfect for paged websites. I've been using these for almost twenty years. If Safari ever removes the ability to activate Favorites via ⌘1 to ⌘9, my whole muscle memory will be screwed.

For almost twenty years I've been thinking of expanding these two bookmarklets to use the rel=next/previous link relations instead, but I don't have high hopes for the quality of the markup of the web anymore. Which is a small problem for the idea of a programmable web. The deeply nested div trees that often result from modern frontend development and weird CSS methodologies are as bad as the table layouts of yore.
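
A sketch of the increment variant as a one-liner (swap the +1 for -1 to decrement):

    javascript:(function(){location.pathname=location.pathname.replace(/(\d+)(\D*)$/,function(m,n,r){return(Number(n)+1)+r;});})();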


> The security benefit to consumers of blocking user scripts is probably a net positive for the average internet user.

Strong disagree. Extensions and User scripts are an unmapped wild west.

Providing a registry type system to securely vend packages/scripts is a very hard security problem.

https://www.bleepingcomputer.com/news/security/mozilla-block...

https://iamakulov.com/notes/npm-malicious-packages/


Whether they're gone or not, I definitely want _more_ of these. I've had great success with shared Tampermonkey scripts at work, augmenting off-the-shelf tools my team uses with conveniences specific to our team. For example, automatically extracting relevant bits within noisy logs when browsing CI results, or providing buttons that kick off narrow re-runs or link to relevant source files. CSS injections are great too, a very lightweight way to improve the accessibility and usability of tools.


Userscripts are alive and well! I use Tampermonkey in conjunction with an Amazon seller extension (which I won't name because I'm not really using it as intended). The extension displays, among other things, a flag icon next to a product listing indicating the seller's country. My script hides the extension's entire UI and replaces it with a country filter. It can be really useful to filter down to US sellers only, and only see brands that are well established here.


Kind of reminds me of the days of Yahoo! Toolbar and friends. I think today we have several ways we can go about this. There are browser extensions that can be used to both complement and alter the behavior of a website. Extensions can interact with the OS and other apps on a system, which offers a lot of potential for integrations and customization. (e.g., on iOS/macOS, Shortcuts allow integrations with 3rd party apps, AppleScript, PowerShell (on Windows), etc.)


How did these user scripts work? I don't get how they could be "blocked" now. I've made my own snippets of JS to run on certain sites before.


Sites can impose a Content Security Policy, that (among other things) can disable inline JavaScript. This makes it a real PITA to modify web sites with user CSS/JS.

I am personally aligned with pro-CSP because it can greatly reduce the attack surface of web sites I host, and is quite effective and precise.

Browser extensions can, and do, play around CSP.

Bookmarklets have no overhead in the browser, because they are just bookmarks that do not interact with the page unless I click them. Having a little addon for each little piece of functionality is annoying, slow, and difficult to maintain and review.


Why do browsers enforce CSP against bookmarklets and user scripts, though?


It looks like Mozilla fixed it for bookmarklets[1] three years ago in Firefox 69.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1478037


Naive users can be convinced to run random code.

Go to discord.com (no need for account) and open browser console.

You will see

> Hold Up!

> If someone told you to copy/paste something here you have an 11/10 chance you're being scammed.

> Pasting anything in here could give attackers access to your Discord account.

Presumably they added it after it kept happening. And likely the same thing happens whenever a random user can somehow run code they got from a scammer :(


I can see something like that as a separate red box above the actual console in Opera and Firefox, regardless of the current site. Annoys the hell out of me, 'cause there is no way(?) to turn it off.


Bookmarklets & user scripts evaluate in the page's context.

CSP literally says: only talk to these specific domains. https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP#exampl...

The browser doesn't block the bookmarklet from running (IIRC). But most bookmarklets immediately try to download & run some libraries to do their thing, or they try to send data somewhere, and CSP is blocking those connections.

It's hard for me to guess how much of this is intentional anti-user lockout, and how much is just oversight or technical difficulty. I could see not wanting to drill backdoors through your own security policy, which is more or less what it would take, but it sure feels like a loss.

Users can turn off CSP for the browser with a flag if they want, but only for the whole browser, not site by site. Also worth pointing out that just disabling CSP for the browser/site (rather than carving exceptions for userscripts/bookmarklets) is something sites can detect (by succeeding in a request that ought not go through) & could potentially decide not to serve you content if they wanted to be petty. That said, extensions exist & aren't readily detectable, so there are options... they are just nowhere near as direct to author & use.


I found the bug I opened against the csp spec, asking for it to not obstruct userscripts. https://github.com/w3c/webappsec-csp/issues/444


The goal is to prevent any way for attackers to inject code.

Bookmarklets and user scripts are collateral damage.


That's probably the right choice, but it sounds like there should be a "captcha" to separate programmers from non-programmers.

The browser remains in a locked down kiddie mode until you solve these riddles three!


I don't see how this could be truly secure if it's JS running on the client. There is nothing stopping a user from running a custom version of Chromium or otherwise that ignores CSPs... Maybe I'm not fully understanding what is being restricted here and where the code is being run.


I guess it's reducing the attack surface for your users, as you can't have a malicious userscript that would log your cc number or something


It was a little frustrating the last time I wrote a Tampermonkey script, mostly because the site uses React: getting at the state stored in the React Fiber required some DOM traversal and knowledge of React's inner workings to figure out things like the download URLs I needed for my script to work. Basically I had a list of files that I own on their site and wanted a "download all" button, so I made one.
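
For what it's worth, the Fiber access looks something like this (the property prefix is an internal React detail that varies by version):

    // React stashes its Fiber node on the DOM element under a randomized key,
    // e.g. "__reactFiber$abc" (React 17+) or "__reactInternalInstance$abc" (React 16).
    function getFiber(el) {
      const key = Object.keys(el).find(k =>
        k.startsWith('__reactFiber$') || k.startsWith('__reactInternalInstance$'));
      return key ? el[key] : null;
    }
    // From a fiber you can walk fiber.return upward and inspect memoizedProps.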

It added a substantial amount of complexity, but it wasn't insurmountable. The scripts he talks about sound like there's stuff I'd have to sub in a shell script for, or use a two-stage process, as there is some post-download processing I'd like to do automatically, but there's no way for me to access my OS from Firefox's sandbox (for good reason, I mean, although it would be nice if I could override that in some cases).


I love the programmable web!

I wanna give a shoutout to Stylus[0] for offering userstyles (basically the same thing as TamperMonkey but for .css files)

[0] https://addons.mozilla.org/en-US/firefox/addon/styl-us/


Maybe because we have an extension for almost everything. If that's not enough, Tampermonkey still works for nearly everything I'm doing. I also use Stylebot here and there for visuals, and block elements with Adblock. I'm not sure I understood the author's problem.


> I made it possible to generate a new document by loading, e.g. javascript:'hello, world'

It never occurred to me to try, but I think this is still possible even without the javascript: URI scheme, as there’d be nothing stopping you from putting an inline <script> in an HTML page described by a data: URI, and bookmarking said data: URI.
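
Something like this, hand-typed and bookmarked (though note that modern browsers restrict some top-level data: navigations, so it may be blocked):

    data:text/html,<script>alert('hello, world')</script>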

Not sure what the browser would consider that data: page’s HTTP origin to be for cookie / CORS purposes, though.


Data URLs have a null origin and don't count as secure.


this has nothing to do with security and everything to do with trying to charge $4.99 a month for everything to the maximum extent permissible under law.


IMHO, we need a browser optimized for power or professional user experience (PPUX), instead of browsers optimized for dumb users (DUX).



Looks nice, thank you!

Tree-based history is smart, so smart that I don't understand why no "mainstream" browser has implemented it yet.


Nice! I wish there was one for Android too…


I think Vivaldi comes the closest, if you're okay with a non-FOSS browser.


That was Opera browser up to version 12 and its spiritual successors, notably Otter browser.

https://news.ycombinator.com/item?id=27695463 (2021-07, 96 comments)

https://news.ycombinator.com/item?id=18830430 (2019-01, 47 comments)


Sometimes I use curl with a session cookie copied from the developer tools. I never automated that part. I don't even know if it is possible.
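
The copying stays manual, but the replay part is just a header (a sketch; the cookie name and URL are made up):

    curl -s -H 'Cookie: session=PASTE_VALUE_FROM_DEVTOOLS' 'https://example.com/api/data'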


We have a lot of those, but they're not as popular of course.

Qutebrowser is my favourite among them.


Is it because in the beginning of the web the share of technical users was much higher than now, and the web wasn't revenue-optimized?


Partly, but also because engineers had more power.

Now decisions are taken by managers or POs who don't care about (or even know of) programmability, open standards, etc.


Writing userscripts and CSS filters gets more tedious every few years, especially mucking with the dynamic DOM and complex frameworks websites tend to use now. WASM and obfuscated/minified JavaScript have made the "programmable web" impossible in practice.


Agreed, except MutationObserver has made dealing with DOM modifications much easier.
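
For example, the usual pattern looks like this (the selector is a placeholder):

    // Re-apply a tweak every time the page re-renders dynamically.
    const observer = new MutationObserver(() => {
      document.querySelectorAll('.promoted-item').forEach(el => el.remove());
    });
    observer.observe(document.body, { childList: true, subtree: true });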


I used to write a lot of small user scripts because they allow me to easily add functionalities to a certain website without worrying about UI and shit like when building a browser extension.


I think this is one of the highest aspirations of web3. API's baked in so deep that it's literally part of the protocol. Rate limited by price.


I remember the good ole' days, when XUL-based Firefox could be programmed to act like a shell to the WWW.


I used to hate ORMs, and a few weeks ago I tried one just to improve my productivity. Guess what: I had to spend two days figuring out the problem that was crashing my MySQL server. It turned out to be a ton of prepared statements created by the ORM.

Simple web is better.


I don't miss them very much. The whole thing was a security nightmare and I am glad Mozilla dropped XUL. I think they truly had to.

I get the utility, but I suspect writing Greasemonkey scripts for SPAs might not be a great experience.


That’s not a knock against Greasemonkey…


I miss the noscript/basic (x)html web.


I miss the "immediate web". Being able to work on simple things without feeling like they are useless, pointless or perceptibly outdated.

Now it's install node, install git, install npm, install a gazillion packages, run localhost, then hope everything works. Build command, clean command, eject commands, deploy command, pre-processors, post-processors, documentation full of so much jargon that I feel burnt out looking at it, hydration, rehydration, pre-rendering, headless CMS... on and on.

I find myself reverting to writing simple bash scripts and userscripts - they are more enduring, not built on a stack of fickle dependencies, fads, and ever-increasing complexity.


If you build web pages without the whole JS frameworks and Node circus, they are pretty durable. Those helpers were invented because people started to build empires (complex applications) on sand (the web), so they are pretty useful for larger projects, but simple websites can still be easily built with plain HTML/CSS/JS.


Yeah, but personally there is no value in building projects like that anymore; you can't use them in your portfolio or to progress your career. I was going to build a personal site, but realised it would probably act as more of a hindrance, because potential clients/employers would see that it was not using the latest new JS fad.

If it was up to me I would never touch those things, but that's what the market has become.

I even resisted learning TypeScript for so many years, because I loathe the idea of learning a language that needs to be converted to have correct syntax. Same with JSX: I use it, but it feels horrible.


People building for the fun of it with simple tools is what made the old school web that so many HN readers are nostalgic for.

These days it feels like everyone is constantly looking for an angle, feeling like nothing is worth doing if it can't be monetized and turned into content. Fuck's sake, people can't even go _fishing_ without strapping up with a GoPro and trying to sell their favorite tackle.

Be punk rock. Go do something for the joy of it. Go learn a language that no one will ever pay you to write. You have a day job to get paid. Why worry about it for what you do for fun?


> Be punk rock. Go do something for the joy of it. Go learn a language that no one will ever pay you to write. You have a day job to get paid. Why worry about it for what you do for fun?

You're right, I'm trying to get into that mindset. Being contrary for no other reason than absurdity.


You think potential clients are digging into the source code of your hypothetical personal website? You sound like you've identified a group of technologies as a personal enemy, and already don't use or want to use them. The most value you can deliver to a potential client when they visit your website is communicating concisely what you have built and are capable of building for other people like them, not what you're capable of building for yourself as a vanity project.


Interviewed and hired many developers. Not once was the tech stack of their personal website, if they had one, ever brought up or considered.


I just assume most hiring gatekeepers will make superficial judgements about the appearance of your portfolio work, and many developers will do the same about your code.

Not everyone is willing to give the benefit of the doubt.


In most companies I've worked for, there is no portfolio to show unless you want to violate NDAs.


I poke around code that candidates have written. Simplicity is a virtue.


I’d say the opposite in my experience when I’ve been a hiring manager.

I prefer candidates who show that they understand their fundamentals, their primary domain and can show that they grow and adapt easily. The current hot technology is ephemeral and can be learned on the job by a competent engineer.

That said, my background is startups so I don’t know what larger companies expect.


My background is in larger companies. No one is going to care what framework your personal site is built on. If anything it might not even come up because there’s such an emphasis on Leetcode questions, but I seriously doubt it would hurt your chances.


If the only reason you build websites is to get paid then you're part of the problem. Separate your personal and professional lives. Or do you not enjoy building on the web for fun?


> If the only reason you build websites is to get paid then you're part of the problem.

This is absurd. I write code for a living, I enjoy solving problems and that is part of my job.

I do not write code in my free time. I play video games, go hiking or go to the beach and maybe read a book with dragons in.

That doesn't make me a problem. My job is not my life. Honestly it shouldn't be, it is not healthy imo


You can code in your free time without your job being your life, as you put it. For example, your hobby project might have nothing to do with your day job and just be something you want to make or that gives you pleasure to tinker with. If you don't really enjoy doing it, or if there are other things you'd rather do, then you don't have to do it, obviously, but this idea that you can either never code outside work hours or have an unhealthy relationship with your job is a false dichotomy.


You're competing against people who do. "Should" it be that way? It's not the employer's fault some prefer to spend their free time programming too ;)


I've been doing this since 2007 as a full-time career. I think my interest peaked around the Ajax/jQuery era and has been in steady decline ever since. I'm just bitter about becoming a web dinosaur.


I would go so far as to say JavaScript hurts the web. If Hacker News was written like Facebook, the user numbers wouldn't be the same. Endless transitions, SPAs, waiting for loading, glitches, popups with fades: all that is a waste of time when the user just wants to read text. I want clean loading: just simple clean HTML, fast rendering, nothing processed after I get the HTML rendered. What would be rad is a facebook.clean where you just get HN-style rendering, clean tables, the browser keeps track of what you clicked, super fast interactions, etc.


FOMO is a bad career advisor. It puts you in direct competition with all the other people anxiously doing only what they think everyone wants them to do. Sort of hard to distinguish yourself if you are doing exactly the same super trendy things everyone else is doing.


I strongly disagree that it has no value.

If I interviewed a developer and they showed me an impressive website/application that was built without a framework, I would probably be more impressed than if they showed me one that was built with one. An understanding of the basic building blocks of the platform is always helpful, just like I would consider knowledge of lower-level programming and different programming paradigms a huge plus, compared to someone who's only ever done web.

I would, of course, also be sure to ask them about why starting from Adam & Eve is only rarely appropriate in a business context, and how it's important to write code that others can pick up and understand. I expect anyone but the most junior candidate to understand that no tech stack is perfect, that it's all about trade-offs, and that the tech is a means to an end. We don't hire people to love a tech stack (most of them are pretty unlovable, to be frank); we hire them to, as quickly as possible, get good software in front of paying users.

If you come in and demonstrate that you can take a thorough understanding of HTML, Javascript and CSS up to something resembling a dynamic, modern, responsive web application, and if we can have an intelligent discussion about what the frameworks and infrastructure bring to the table, you're obviously miles ahead of someone whose knowledge stops at copy-pasting React snippets and googling the WebPack error.


When doing consulting, many customers only care that they get a website, not how it works underneath.


It's OK to do a thing because you wish to, and not because it might advance your career one day.


Amazon and Google frontends work without JS. The same can be applied to other empires.


The more I see the modern web, the more I appreciate PHP (LAMP stack or similar). On the client side, with IE6 more or less gone, most of it can be done with vanilla JS; no need for jQuery anymore.

If what you are doing is just another CRUD-based website, there is no real need for more; PHP is designed to make that simple. It had a well-deserved reputation for poor security, but it got better, and 90% of it is addressed by just using parameterized SQL queries.


You can still drag and drop a folder to the Netlify UI and it's published immediately. How you produce the static assets is up to you.

Of course, having to work with multiple devs on the same project adds lots of complexity to minimize risks and conflicts.


> having to work with multiple devs on the same project adds lots of complexity to minimize risks and conflicts.

That's when you link a GitHub repo to Netlify.


I miss view-source.

It used to be you could open up your dev tools and figure out how things were made. And you could learn all sorts of things about architecture and style and techniques this way.

Now it's all buried in build processes and obfuscation techniques, as if your button handler is a tightly kept secret that's somehow protected by scrambling your variable names.

Although the other day I came across a complex app that was built in modern vanilla JS with native ES6 imports alongside regular old CSS. It was a work of art.


I got so tired repeating this and trying to convince others that I just created my own agency. Clients are happy, I'm happier, so I guess we are not totally crazy.


I remember the first time I heard the term "tree shaking" for CSS.

...I genuinely wondered what tree was involved. It took a Google search before I understood that it was industry jargon, which meant this: to reduce/minify code.

There are many similarly opaque things in the world of web-development.


Tree shaking is not a web-development thing. Wikipedia says the term originally came from Lisp https://en.wikipedia.org/wiki/Tree_shaking

Every industry has things that have names that you might need to learn.


Tree shaking doesn’t mean minifying the code, but removing the CSS classes from the payload which are not used anywhere. The proper term is dead code elimination (DCE).


To help clarify for the person you're replying to: minification could be considered to mean only this: taking an interpreted language like JS and essentially compressing the text of the code by applying transformations that don't change its semantic meaning, like swapping out long variable/function names for short, computer-generated single(ish)-letter names, or removing whitespace.

Much of this was because real compression methods like zip simply weren't available during early decades of web development.

Obviously, if shrinking the code size is desirable and you were building a library that did minification, you'd want to move on to also adding other, more dangerous changes that could change the meaning of the code: removing unused CSS classes, doing DCE on JavaScript, etc.
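
A tiny before/after to illustrate the safe kind of transformation:

    // Before minification:
    function addNumbers(firstValue, secondValue) {
      return firstValue + secondValue;
    }

    // After: same semantics, much shorter text.
    function a(n,t){return n+t}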

--

Since programming is a giant, decentralized soup of autodidacts, there's no "word of god" authority that can really say "this is the one true name of something". It's mostly just lingo that passes in and out of various communities, and a lot of times communities (as you're seeing in splintercell's comment) try to helpfully match themselves up and standardize so it doesn't degenerate into pure chaos.


Why is "dead code elimination" more proper than tree shaking? Is CSS even "code"?


Dead Code Elimination is the goal. Tree Shaking is just one technique to achieve that goal.


Imagine you have a root (let's say your index webpage). Then you can build a tree of what is used by what. So if you have a CSS classname that is used by C (C being an HTML element, component, whatever) and C is used by B, but B is not used by A (the root) then it's shaken (removed) and "falls" because it's not connected to the root.

Basically remove everything not connected to the root.
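
In module terms, a toy example:

    // utils.js
    export function used() { return 1; }
    export function unused() { return 2; } // imported nowhere: not connected to the root

    // index.js (the root)
    import { used } from './utils.js';
    console.log(used());
    // A tree-shaking bundler drops `unused` from the final bundle entirely.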


If you're telling a story about how you learned something, it implies that you have learned it.


Don't forget containerization: instead of just downloading the sources, building them, and running, you need to think outside the core problem all the time.

I like to think of the term "developer UX" in contrast to end-user/mass UX.


You can choose not to use most of that stuff. I recently moved some stuff over to esbuild and that's been pretty helpful for that. Jest is a big and heavy thing but I've found it's the test framework that gives the least headaches without really needing to be configured. And of course deploying can be a one-line "run esbuild with the output going here". Or even check in the artifacts!

You lose the nice stuff like livereload but you can hack in something to that effect if you want to.


The immediate web is how I code private web projects to this day; at work it's another matter.

If I had a vote on the matter, it would still be fully SSR with a little bit of vanilla JS when needed.


For frontend stuff, moving away from the JavaScript ecosystem to C# with Blazor has really improved my motivation.


At the expense of unrealistic download size.


> not built on a stack of fickle dependencies, fads and ever increasing complexity.

Every dependency is a benefit and a liability. It's been interesting to watch this ecosystem grow over the past decade as we collectively explore how much liability we're willing to accept.


This is why I created https://webcode.run: the elimination of all tooling and a fast development loop, even for the backend.


I hate all that nonsense, so I don't use it.



