I agree with the author. Is it highly unlikely? Sure, but it's important to create awareness of the associated risks. At the end of the day, you are sending potentially sensitive data to a third party.
Good security posture is all about building habits and I personally don't want myself or my team being comfortable with the idea of pasting code or JSON config files into a third party system.
If any of these online tools are sending your data to the server, don't use them. You don't know what happens with your data once you send it and accidents happen even if the service has your best interests in mind.
For the ones that are client side, such as JSON-to-Go, you can save the client-side code locally, set a bookmark, and use your local version instead.
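And if you want a team to use that saved copy rather than everyone keeping bookmarks, something as small as Python's built-in server will do. A minimal sketch, assuming you saved the page into a ./json-to-go directory (the name is just an example):

    # serve a saved copy of a client-side tool locally; nothing phones home
    # (Python 3.7+ for the --directory flag)
    python -m http.server 8000 --directory ./json-to-go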
> Is it highly unlikely? Sure, but it's important to create awareness of the associated risks. At the end of the day, you are sending potentially sensitive data to a third party.
I don't think it is highly unlikely. I think it is highly likely that if you make a habit of using these tools one of them will eventually be compromised. Either through a technical hack, financial pressure, purchase by an immoral entity, or a disgruntled employee somewhere along the path.
Then again, if it's just for testing/learning and the data isn't really sensitive, who cares? Use what's easiest. Most of the time the easiest for me is Jupyter, so I can test how it actually works, and when I'm finished I have working code.
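For the JSON cases in particular, a notebook cell covers most of what the online validators/pretty-printers do, with nothing leaving the machine. A trivial sketch:

    import json

    raw = '{"user": "alice", "roles": ["admin"]}'  # toy, non-sensitive sample
    parsed = json.loads(raw)              # raises ValueError on invalid JSON
    print(json.dumps(parsed, indent=2))   # pretty-printed locally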
The likelihood of compromise depends on the data being sent plus the chance of said service being malicious, which is why I said highly unlikely. Even if I gave you a JWT or config file, you'd still have to know how and where to use it. Sometimes this is obvious; sometimes, even if you know how, you can't access the where, like if the credentials granted access to a DB local to only my machine or to a server behind a firewall.
This topic is much broader than just online JSON tools. There are all kinds of converters, transformers, and linting tools available online for many languages and frameworks that you shouldn't be sending your private code to.
I disagree with the author on always running it locally for yourself. If a service is useful enough, you should set it up internally so your team has a sandbox to use it. Spread the knowledge instead of hoarding it. Compiler Explorer is an example here.
I keep a copy of CyberChef [0] locally. Can do the majority of the data manipulation I need. Does JWT Decoding / Signing / Verification and JSON Validation / Pretty as well. You can experiment with insignificant data here [1].
For those criticizing the author for 'fantasy' security problems, it seems relevant to emphasize that they work at a bank: their threat model is probably rather more rigorous than most.
Security is all about habits; using such tools trains bad habits.
Sure, jwt.io should be fine, but what about the dependencies they use to build it? How thoroughly are they checked? What about domain hijacking, HTTPS downgrade attacks, and similar? Etc.
It's probably still all fine for jwt.io; they likely use certificate pinning and similar measures.
If you want to know how tricky attacks can become, just look at the Ethereum dark forest article, which was posted here a few days ago.
It's not a question of whether such attacks (based on undermining widely used web tools) will happen, just of when, and how big the fallout will be.
If you don't trust your team not to paste privileged production tokens to third party services, security training might be a better course of action than defining vague rules.
The file format is archaic, but not objectively worse than any alternative (modern or contemporary). It helps to think of an ACH file as an expression of a line protocol, with field framing and offsets and some internal checksumming.
ACH files are mostly human-readable in their raw form (some simple vim highlighting goes a long way), and that was surely a design goal. The format is terse (important for 1960s-era data transfer/storage costs), which makes it information-dense, and that remains a feature today.
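To make the "line protocol" point concrete, here's a sketch of slicing one record type by its fixed offsets. The offsets follow the commonly published NACHA layout for a type-6 entry detail record, but treat this as illustrative rather than a validated implementation:

    # each ACH record is a 94-character fixed-width line; fields sit at fixed offsets
    def parse_entry_detail(line: str) -> dict:
        assert len(line) == 94 and line[0] == "6"  # type 6 = entry detail
        return {
            "transaction_code": line[1:3],
            "receiving_dfi":    line[3:12],        # routing number incl. check digit
            "account_number":   line[12:29].strip(),
            "amount_cents":     int(line[29:39]),  # zero-padded, implied cents
            "individual_name":  line[54:76].strip(),
            "trace_number":     line[79:94],
        }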
As a system, the ACH Network is incredibly reliable and secure. The security is built into the system, not into the file format. Only trusted players are invited to participate, and the threat of removal is much greater than any enticement to deceive. Furthermore, it is a full-recourse system. Errors can be backed out after the fact.
File delivery is secured in the usual way. Preshared keys, SSH/SSL, etc. ACH is more secure than your bank/broker website, or any ecommerce transactions.
I would like to see encrypted (or at least cryptographically-signed) ACH files, and I wish ACH was used in support of something quicker than a 2x/day (business days only) batch settlement cycle...but that's an interbank/Fed issue.
Yes, it's sometimes ridiculous what regulated businesses can get away with, as long as it's either historical or was certified as secure at some point in time.
The alternative is oftentimes doing nothing and putting people out of work. You shouldn't proactively punish people for the potential actions of other unrelated people who might choose to break the law.
And a good one too. I'm currently maintaining https://0bin.net, and because we encrypt everything client side, people feel like they can post anything they want. We get some pretty personal stuff.
They really should not. It's a can of worms. We can get compromised. Bought. Receive a court order (we comply with DMCA). Or they could be on the wrong URL (typosquatting, phishing...).
Don't trust random online services with your data. FOSS or not, the code we serve can only be trusted as far as we can be. And you don't know us. And you will make mistakes.
Now I'm guilty of it too, I share passwords with 0bin sometimes. But at least it's my service, I can assess the level of threat.
People post the full links (including the key) everywhere, so regularly, we google a bit to check what people use us for.
This allowed us to discover we were pretty popular in the crypto community and in specific fan-fic subreddits, which led us to implement BTC tipping and reader mode.
We also got reports, tickets, dmca, etc.
We cannot brute-force the encrypted payloads of our thousands of pastes, but for a sample, it's easy to follow the breadcrumbs. And if we do it, others do it too.
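For those unfamiliar with the scheme: the key is generated client side and travels only in the URL fragment, which browsers never send to the server. A rough Python stand-in for what the site's JavaScript does:

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()                      # generated client side
    ciphertext = Fernet(key).encrypt(b"paste body")  # only this reaches the server
    # the share link becomes https://example.com/paste/<id>#<key>
    # anyone holding the full URL (fragment included) can decrypt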
>The goal of 0bin is not to protect the user and their data (including, obviously, their secrets).
>Instead, it aims to protect the host from being sued for the content users pasted on the pastebin. The idea is that you cannot require somebody to moderate something they cannot read - as such, the host is granted plausible deniability.
The titles aren't encrypted. Perhaps people are putting personal data in the titles of their posts, or hinting at personal data in the encrypted portion? Which is still a problem since the code served by the site has access to the plaintext, even if it's not normally sent back to the server. It would be trivial to change the code to send the plaintext or encryption key to the server, or just weaken the encryption somehow. Even if you trust the site operators they could be ordered to implement a change like that with an NSL, and prohibited from talking about it.
As they say in the FAQ, the encryption is there to provide plausible deniability for the operator of the site, not to protect the users' data.
Titles are rarely used, but I did receive emails from users saying "woops, can you delete this?" with very personal content.
Which is why we recently implemented the delete feature (creating a paste gives you a cookie that allows deletion): we don't want to spend time on customer service for a free site.
Super easy to set up with Vault, it just hooks into the cubbyhole engine. The one-time-token is also the decryption key for the data store. I find it great for myself and the occasional email but also wouldn't really want to have others use it too much.
Up to now we just delete it. We coded an admin tool for that. It takes time because we have to read the claim and assess its legitimacy, not because of the deletion itself.
Assessing said legitimacy is tricky. Not because people lie (up to now they have been pretty decent), but because you really don't want to be tracked while following potential child-porn links, so it may take time.
Scaling moderation could become a problem if 0bin becomes more mainstream, but with so few reports, it's not a problem right now.
It's damn near impossible to even get the government to set up a locked-down file-sharing folder for an active lawsuit using a platform like Box or OpenText that they already have in place, unless someone relatively high up is committed to pushing it through IT. The justice system does not mess around. If you try to send, say, a casual Dropbox link to a DOJ employee, they typically aren't allowed to even click on it.
I came in thinking "who cares", really, but anytime there's money involved you step into the piranha bay. I heard lawyers even had to be followed around in the archive rooms: some would snatch documents, or sniff out whatever info they could to help their case.
So yeah, they have to be careful, because people are regularly trying to tip them over.
That said, I naively plugged in my phone on day 2 and their setup gladly accepted my device as an MTP mount (it even loaded some drivers along the way).
I had access to stuff I shouldn't have...
You get a weird feeling of paranoia, yet nothing that solid. It's a big mass of people with official titles trying to be serious.
I dunno, I worked at a bank that was about to be a direct Capital One competitor, and we weren't really worried about "leaking implementation details" via the structure of our tokens. Sounds like some security through obscurity.
Not leaking tokens/token structure !== security through obscurity.
I have seen idiotic implementations of JWT which effectively leak session details that should only be kept server-side because "it was just easy to validate it on the client-end of things"... this specific example is an extreme one from my career history, and was caught long before it ever made it to prod.
But.
In this engineer's "hello world"-level implementation of JWT, they effectively overloaded the session with incredibly sensitive data that should have been server-side only (later back-pedaling to say this was a "dev only" implementation!). They did use JWT.io to debug their "implementation", and in doing so leaked non-trivial details to a third party about how our authentication system was built.
This guy was a "junior engineer" who was hired because of nepotism - making this an even more extreme case... but really it's not. I've worked with incredibly ignorant, careless, untrustworthy, desperate, etc. people through my entire career (admittedly being those things myself sometimes).
Anywho: not leaking implementation details and potential software/infrastructure secrets is not security through obscurity.
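To make the distinction concrete, here's a hypothetical sketch of the anti-pattern versus what a token should carry (field names invented for illustration):

    # the anti-pattern: server-side data riding inside the token itself
    overloaded_claims = {
        "sub": "user-123",
        "ssn": "xxx-xx-xxxx",            # sensitive data that should never leave the server
        "internal_roles": ["db_admin"],  # implementation details handed to any token holder
    }

    # a token only needs enough to identify and authorize the bearer
    minimal_claims = {"sub": "user-123", "exp": 1924992000, "scope": "read"}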
Attackers not knowing details can make attacks much harder, especially when they're attacking things where they don't get any direct feedback. It also increases the likelihood of attacks being identified as such by monitoring before they succeed.
Sure, you MUST NEVER rely on obscurity for security, and any form of obscurity-for-security that increases complexity increases the potential error/bug surface and is as such a bad idea. But not freely giving out exactly how your bank works internally is still a very sane/useful idea.
(It's not the same as, e.g., hiding how a tool that a lot of external people use directly works; it's about internal-only workings which you don't give out.)
I think the author is overdramatic. Like when he says this:
>I've been burned a number of times by folks putting a Non-Production JWT or an Open Banking Sandbox certificate into jwt.io.
He hasn't been "burned" by that at all. No security breach occurred because of that. He does have somewhat of a point, but he goes off into fantasy land trying to justify it.
> these are sensitive in of themselves, as they have implementation details for our services, and as mentioned, certain things could be used outside of Capital One.
I imagine these JWTs will find their way into a frontend application in prod (because what else would they be for?), at which point any actual user of theirs could pull the token down and get access to these implementation details. The only thing sensitive about a JWT should be its ability to authenticate a user; encoding actually sensitive data inside that token, such as anything you want kept secret, would be a huge mistake.
I thought the entire point of signing a JWT was because you need to validate it because there is some endpoint that is untrusted and you have to treat the claims as potentially compromised.
I may have a backend service that is both internally and externally exposed, necessitating all requests to it be signed. I may have a backend service that has load limits that need to be fairly adhered to, and to do that I allow users only so many requests per time period. Even without load limits, I may want to know who is calling me, and forcing services to first request a token on behalf of themselves makes it more likely that each service will uniquely identify itself.
Really, if you trust the callers of your API enough to allow unsigned JWTs, you probably should trust them enough to not require anything (because copy/paste mistakes alone mean the data isn't valid, intention aside). If you can think of a reason why not having any identity information at all is problematic, you should probably force a signed JWT. The extra effort is negligible, it reduces potential risk if the use of the service changes and it starts getting 'untrusted' users, and it helps reduce "oops" mistakes upfront.
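As a sketch of how little that extra effort is, using the PyJWT library and an invented shared secret:

    import time
    import jwt  # PyJWT: pip install PyJWT

    secret = "shared-secret"  # hypothetical key known to caller and service
    token = jwt.encode({"sub": "svc-a", "exp": int(time.time()) + 3600},
                       secret, algorithm="HS256")

    # decode() verifies the signature and expiry, raising on any tampering
    claims = jwt.decode(token, secret, algorithms=["HS256"])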
Sure, they don't need a password database on the receiver's side.
But it's best to treat them like a temporary password plus some arbitrary metadata.
So no, they don't protect you from corrupted clients; they just limit the damage, in terms of what can happen and when, but the corrupt client can still do all kinds of bad things.
Edit: I need a different Android keyboard, this is driving me nuts.
It is rather common practice to encrypt the JWT that is presented to the browser which uses it as an opaque value. Pasting a decrypted token on a public site is then definitely a form of information disclosure. Whether it is exploitable or not is a different question.
You can put claims (and other metadata) in there which you don't want the client to see.
For example, you might not want to expose exactly how your internal permission system works to the client, to make it harder to find places where, due to bad configuration, your JWT can be (ab)used to do things it shouldn't be able to do (i.e., making security bugs harder to find from the outside).
Or it could expose internal IDs which might have some privacy concerns.
Or the internal IDs are not securely random, and you might be able to use that in some way for a very roundabout kind of oracle-like attack.
Or this internal data could be used to make social engineering attacks easier.
....
Edit: or you encapsulate another internal-only access token in there, which the gateway server your website communicates with uses to access the resources you are allowed to access; you want to make sure that an attacker who somehow got access to the internal network doesn't get hold of any internal tokens without hijacking a service that holds them.
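A minimal sketch of that last pattern, assuming the gateway wraps the internal token with a key only it holds (a real deployment would more likely use JWE; Fernet here just shows the shape):

    from cryptography.fernet import Fernet

    gateway_key = Fernet.generate_key()    # known only to the gateway
    inner_token = b"<internal-only JWT>"   # carries the private claims
    opaque = Fernet(gateway_key).encrypt(inner_token)  # what the browser sees

    # on each request, the gateway unwraps it and forwards the inner token internally
    assert Fernet(gateway_key).decrypt(opaque) == inner_token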
If you never use production data for anything other than production this stops being a problem. You can put all the dev and staging JWTs you want in to jwt.io at no risk if those things aren't available to the outside world.
Agreed. This pretty much applies with any data & tool. If the data is extra sensitive, make extra sure the tool you are using is secure. If your data is for dev purposes only, the tool doesn't have to be validated as thoroughly.
That is a good point to make. If the development data can lead to exfiltration of higher-privacy data, then I would define it as "more sensitive" and take that into consideration; this particularly applies to config information required for authentication. I understand your perspective; however, it is important to take lateral movement into account.
Except for habits and some catastrophic failure coming together.
The latter makes you need to access production data even though you don't want to; the former makes you leak it in the hurry to fix that catastrophic failure.
For JWTs, I agree with this stance, since they are security credentials and therefore basically all of them are sensitive information.
I don't discourage online tooling in general. It's a risk/benefit trade-off: no, you shouldn't paste sensitive information into websites run by other people in general, but for non-sensitive information where you don't care whether the online tool is logging it, go for it. There are significant advantages to not having to roll your own for every single thing you need to do.
Carried to its extreme conclusion, "Don't use online tooling" implies "Don't read jvt.me," because who knows what that website is doing while vending you blog posts? Clearly, you should roll your own solution by maintaining your own private collection of knowledge that you never share with anyone else. ;)
>For JWTs, I agree with this stance, since they are security credentials and therefore basically all of them are sensitive information.
As long as it's not a prod env and your expiration time is somewhat reasonable, I don't think it is sensitive at all, unless you're storing actually sensitive information in them.
So you're just hoping that there isn't a bad guy on the other side trying to use the credentials in real time? Seems like a bad assumption to make, especially for a site that's specifically made for pasting in JWTs.
No. No one's hoping anything. It's more like a realization that even if there's a bad guy, they're only going to be able to pwn your dev environment, which has no valuable data in it, and can be replaced with a script.
I hate how, when I'm copying and pasting a URL for a test or internal environment into the browser address bar, I may have a typo in there, or an extra space. Bam! The URL just became a Google search.
It's even worse than that, though: enter 'myintranetsite' and hit return, and you end up on a Google search page. Instead you have to enter 'http://myintranetsite'.
I guess I just had a different interpretation of "no requests are made to Google". It seems "no requests" was intended to be "no search suggestion requests" and not "no requests that leak this information to google".
Ahh, cool... but a binary toggle is a bit too coarse.
It would be nice if, when the input string contains whitespace, it performed the search-engine query for you automatically, or allowed some custom regex to decide whether to query the search engine.
Or just have two text inputs[1]: one for URLs and one for search, rather than trying to overload two functions into one text input and trying to guess what the user wants.
In your search settings you can add keywords. So if you set '?' as the keyword for Google, you can type '? foobar' into the address bar to google for foobar.
(firstly you should set a different default search engine)
It's been nice to use the Firefox setting to have a separate search bar. Your address bar will show more results from your history, which is often what I actually need. Then you can just hit the down arrow to select "Search with x" options.
My only minor quibble is that your default search engine will be last in the list.
So the latest Firefox broke that for me. It seems to front-run my URLs as I type them and if the result is a 404 it sends me over to DDG with my URL as the search. :/ Really, really obnoxious but I haven't taken the time yet to paw through about:config to see if I can turn it off.
I often have the opposite problem: I'm on Firefox and try to google "FooError: Bar happened" and instead of directing to google, Firefox prompts me to select an application to open "fooerror links".
I work at a place that takes this all very seriously. No. Google search is available but monitored; Skype/GitLab/Bitbucket/etc. are all blocked. Code-formatting tools are blocked to the extent possible, and people are instructed not to use them. Folks who slip up and get caught are usually written up the first time; after that they are terminated.
The only way to legitimately use tools like these from work is if they pass rigorous vendor assessment processes and have rock-solid contracts in place, covered by nine-figure E&O policies.
Do you work in a field that warrants these kinds of stringent requirements, or is the security team overzealous? Do these kinds of rules extend to using third-party libraries, etc.?
Both probably. Most of these rules have come as a result of regulatory action/audit findings. Efforts to dial these back repeatedly get stymied by people demonstrating the need for such checks.
Well, if you're using Slack, for example, you'll normally find that the business uses it to send a lot more sensitive stuff than just API keys. If your Slack gets breached, you've pretty much got yourself a data breach that you'll need to report (if you're covered by GDPR). If your GitHub is breached, you've probably got major issues and need to do a code audit to make sure there are no backdoors. If any part of your infrastructure is compromised, you're in trouble. When you get down to the nitty-gritty, you've got to store sensitive data somewhere.
Honestly, I find it super annoying when someone is fine with me sending them a link to Kibana (the access details for which are in Slack) to see an API key, but has an issue with me sending the API key to them via Slack. The whole "we don't trust Slack, but we'll send customer data to each other via it and keep all of our secret business info on it; yet the API key for an internal service that just outputs public info, that's too dangerous" attitude.
Oh, and then there are the people who store everything in Vault or something and then give out the password willy-nilly. Mate, if it's got to be encrypted then we shouldn't be giving it out to everyone. If it's got to be given out to everyone then it's not sensitive data, it's just private.
For most businesses, the main thing you need to keep safe is your database.
In a project I'm working on we encrypt our secrets with git secret before sending them to GitHub.
When we want to quickly share unencrypted secrets between us we drop them as files into a server we access over ssh. That should be OK.
The gist of it is that if a secret is in the clear on a server outside our organization, it's not secret anymore. And yet my customers trust their cloud provider (Google) with their data.
But at least you have an SLA with most of these services if you are using them at an enterprise level. You don't have any such SLA with Auth0 when using jwt.io.
I agree with the premise, though it's easy to check whether these tools are sending data to a remote destination with the network activity developer tool in any of the major browsers.
No, you'd just have to validate it when you're inputting something particularly sensitive (and I really do usually look in the network console when I'm doing something like that)
Personally I don't feel that's true if you include the significantly increased amount of research necessary to even find a decent local solution most of the time. Web apps are much more discoverable.
Then I'd change the keys in that rare circumstance. What are the chances the attacker will be able to take advantage of the leak before I have a chance to change them, particularly given an attack like this would likely be a blanket attack and not targeted?
It actually will (at least for websockets), it's just that websocket messages aren't nicely included in the timeline along with typical requests. You need to find the request that initiated the websocket and go to the details -- it shows the messages there
More important than security issues, you are building a chain of dependencies that likely is undesirable. The service goes away, and you have to change your workflow -- less likely to happen with local software. Also less likely to happen with open source software than commercial software.
What if countries similarly tracked their dependencies on other countries and foreign companies, rather than just their budget? There are some trade-offs where you want to avoid dependency even if it is more costly. Recent scandals involving selling off water sources and public infrastructure only to lease them back come to mind.
I don't agree. I think online tools are great for quick-and-easy testing and don't raise any privacy issues. Mostly the benefit is ergonomics: I don't need to set up a bunch of stuff to do the thing I want; someone has already done it because they had the exact same need.
The kind of thing I typically do with them:
- Diff two files
- Check brackets: JSON, JWTs, that kind of thing
- Run code snippets in a fiddle site
- Regex
- Unit converters, HEX/decimal calculators
- Color pickers
In all cases, I'm using the tool before anything sensitive has been created. Why shouldn't I use a regex tool to figure out the exact string before I copy-paste it into my code? Or if I want to see if some particular little algorithm works, why not play around with it online, when an editor is already there and ready?
In any case, whatever I discover is part of a larger whole that was not set up in a way where this subproblem was going to be easy to test for, e.g. my particular use case may not make it easy to put a bunch of test strings through a regex.
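For the regex case specifically, the same play-around loop works in any REPL, with nothing pasted anywhere. A trivial sketch:

    import re

    pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # the pattern being worked out
    for s in ["2021-03-04", "2021-3-4", "not a date"]:
        print(s, bool(pattern.match(s)))          # check candidates before they land in code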
I was boggling at how this could be tolerably efficient for someone, but I think I see it. My development workflow is terminal-centric, so copying out of the terminal to paste into a browser is extremely painful, whereas running 'diff', 'json_verify', etc. on local files or typing snippets into python/node/etc. from the shell is almost free. I suppose though if I were spending my day in VS Code or any kind of GUI IDE or similar, and if my workflow isn't optimized to pop open a unix shell with a keystroke for quick throwaway commands, it would be a lot easier just to C-a C-c alt-tab C-v from my GUI editor into a browser.
Pretty much. I use a mix of terminal and VS Code for development. Quite often I either don't have tools like json_verify, a hex/RGB code converter, a JWT pretty-printer, etc. installed, or I can't remember what they're called or how to invoke them. At which point it's usually easier to slap it into a web search.
Witnessing developers copy-paste code from their editor into textareas on webpages to do formatting/linting/etc induces the same kind of internal cringe-factor as when witnessing general computer users use the mouse for absolutely everything and knowing zero keyboard shortcuts.
Completely unsurprised that this is coming from someone working at a bank. I saw a lot of the same working at another very large bank. Banks are extremely risk averse in certain areas, such as using externally hosted tooling. Certain areas of my employer, including where I worked, couldn't even use public cloud services. (And my area was additionally PCI, meaning that we also couldn't use our internal cloud services...) It's to the point where they would rather risk the project's or product's success than accept whatever risk they perceive in external hosting.
And yet I find that their security model is just backwards. They have myriad rules, like you can't email outside the org, and when people need to send something to others, they end up finding solutions which are less secure than email. My friend interned at a top bank as a software developer, and he got the response that, due to access requirements, they couldn't give him any project.
I do enjoy this perspective from someone who says "I have no access to production data". Typically that is used as an excuse to not practice any security whatsoever, rather than as a side-note on a security heads-up!
I go back and forth on this all the time: easier access to production means faster development, but requires more discipline. Being able to reproduce production issues without production data takes a lot of engineering, which can be hard to justify when you're a tiny shop; when I was DevOps for 100 engineers, the time was certainly much easier to justify...
Funny, I was just thinking about this the other day when I wanted to convert a bunch of JSON into YAML.
There were a number of online tools that did the conversion. The first thing I did was test them with dummy data to make sure it was fully client side and worked offline.
Can't be too safe if you're planning to run these tools on data that is protected by contract or NDA. Even if it's not, I still wouldn't want a third party site saving and potentially doing something with the data.
I mean, as a dev for last 15 years, my rule of thumb is "don't trust the internet". Don't copy paste your code onto the internet, and don't copy paste internet's code into yours.
Also, another rule that helps, what's in production, stays in production. Don't copy paste things onto your machine, don't write things down in your notebook and don't even try sending it over the public internet.
If you could sandbox part of the screen to say this thing is standalone, cannot perform any networking, and is cut off from everything else on the page, that would be great.
The browser could recognise that tag and you get a safe space for people to copy/paste/interact with online web tools.
I'm pretty sure you can do this with WASM right now, but the browser doesn't inform the user that this is a safe space.
And if the app really is completely local to your browser like it says then make a copy and run it locally or on an internal web server for your whole team. Boom. No more "malicious updates" problem.
While this article goes into the technical detail of how an attack from jwt.io to a developer might work, I think it most importantly leaves out what the potential threat actor in this threat model is, and what they hope to achieve. The actor would need to compromise jwt.io and use it to specifically target some developer (since the data is likely in localStorage).
This kind of attack is, I think very unlikely to happen because the costs vs potential rewards / risk are so poorly balanced. A jwt.io compromise is pretty hard, and you might get nothing from it!
That said, I agree with the idea that within the web security model, people should not be pasting security-critical data into sites! But I think this is more an issue of people having access to these security-critical keys than the sites themselves. After all, they could have downloaded a malicious binary, or their laptop could be stolen. People should not be put in a position where they have security critical keys on their clipboard.
I agree with the author, but I still think he makes some invalid points.
1. "Although Non-Production, these are sensitive in of themselves, as they have implementation details for our services, and as mentioned, certain things could be used outside of Capital One."
Implementation details of your services shouldn't be a part of your security. Otherwise, you are relying on security by obscurity. I agree that you shouldn't necessarily share them publicly (if only for the sake of preventing people from relying on them as a public API), but declaring them a security breach is far-fetched (I'm assuming they are not actually relevant to your security).
2. Many of the points that the author attributes to 3rd party services actually also apply to local tools, if they are downloaded by something like npm.
* it's not clear whether the code you are running is identical to the open source version of the code that you think you are running
* you have to trust a third party you have no relationship with
I've made internal tools for this exact thing, JWT inspection, which was a nice exercise and guarantees privacy. For things like JSON, using your editor/CLI would be much faster. I'm not sure why a lot of developers don't add these integrations or learn the tools.
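The core of a JWT inspection tool really is tiny. A sketch of the non-verifying decode step (base64url segments with their stripped padding restored):

    import base64, json

    def b64url_json(seg: str) -> dict:
        pad = "=" * (-len(seg) % 4)  # restore stripped base64url padding
        return json.loads(base64.urlsafe_b64decode(seg + pad))

    def inspect(token: str):
        header, payload, _sig = token.split(".")
        return b64url_json(header), b64url_json(payload)  # inspection only, no signature check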
Because many users would rather not learn how to use 12 different CLI commands. Many software devs can't even be bothered to learn how to copy/paste text content in the terminal or vim when it spans longer than the screen.
A web page makes things simpler; everyone has a browser. I built an internal web tool replicating what the ones found on the Internet do, and developers in my organisation use it daily.
I think that for identical reasons, browser extensions should be severely restricted on dev machines to a whitelist. I think it’s crazy how so many developers install Chrome extensions managed by random anonymous people which often have broad permissions and which auto-update.
I am constantly suspicious of all online tools. Every time I see someone paste a blob of customer data into an open browser tab to format the JSON I cringe; this is precisely the reason I spent a few hours to learn how to use `jq` efficiently.
Webapps like jwt.io run locally in your browser and you can monitor traffic, prevent further requests or even run them yourself to make sure of this. This post is misinformed at best.
Not sure if you fully read the article, but I mention that up until fairly recently, jwt.io was performing some metric collection, which I'm not sure many people were aware of
Open in private window, set to offline in the console's network tab, do your thing, close the window. At least I'm not aware of a way the app could send or persist the data when you do that.
If the app runs locally in the browser, you monitor network connections and whatever else for web sockets, etc, to ensure that no information is passed from your browser to the site beyond the initial http request, then, there never would be any information sent to the site to batch, correct?
I think that is correct.
Also, I agree with previous posters who pointed out that for the common JWT use case (user authentication in an SPA or website), the JWT lives in users' browsers and so should not contain any sensitive information to begin with.
Agree 100%. At my company, I've deployed the open source version of JSONLint.com internally to prevent any accidental exposure of internal or confidential data.
It's always seemed to me that if someone at Google were to look at the terms entered into Chrome from my company, it would probably reveal a shocking amount of proprietary information: plans, costs, passwords, etc. It's not exactly the same online tooling that the author is talking about, but anything going into your browser is probably going somewhere you can't truly trust.
For me, it's more about workflow than anything. It's much faster to run some JSON through `jq` than it is to reach for my mouse, switch to my browser, open a tab, google for a JSON validator, try to find a decent one, then try to copy and paste a huge file over.
If you spend all your time in the browser anyway, it might be different for you.
This is an area where GitHub Pages shines. Unless my threat model includes GitHub colluding with the website to steal secrets (it usually doesn't), I can check the source of a tool once and later view only the patches, with a guarantee that the source I see is what's actually running.
This was a great read and I definitely agree with the author that you have to be careful about data that you post online. This is a reason why I've created Polypad (https://mattebot.co/polypad)
If I made a list of the programs people should not be using online, some dinky data format validator would probably be among the last priorities on it, probably somewhere just above an online converter between l/100km and mpg, or an online egg timer. Pick your battles and all that.
I am now motivated to set up local tooling that is as easy to use as online tooling. I've been nervous at times about pasting data, and usually triple-check first, but that itself takes time, and one day I might be in such a hurry I don't take it.
Even if JWT.io does everything on the up and up, if that site is compromised then every single user that pastes their token is as well. If I was a malicious actor, that's a site I would target first. We should use trusted local utilities to decode these tokens.
What's a good open source JSON viewer GUI (or TUI) with Linux support (with collapsing trees, searching, filtering, diff, etc)?
I had these exact same concerns last time I was looking for one. I found some online JSON tools and could use them for simple things. But I hated putting the JSON I was working on into some third-party website, even if it didn't contain anything sensitive.
I try to use jq whenever I can on the command line, but having good, local, visual tools (that aren't web-browser plugins or filled with Electron cancer) would be nice.
Considering that user agents are far more powerful than servers if you divide their power by the number of users, and considering that we have WASM now, one should wonder why these tools even need to run on a server.