eddythompson80's comments | Hacker News

That already happens. That's why you see waves of bans in certain games based on evaluating/re-evaluating previous post-game data.


It's not possible because the client's representation of the information from the server is an important part of the "gameplay". In typical network applications (that are not video games), the client's interpretation/representation of what the server is sending exists entirely for the usability of the application. Maybe the client displays an array as a text list, a series of cards, different tabs, etc. It's just about what makes sense to a particular client/user and what makes their life easier. It's not "unfair" for me to have an email client that highlights unread emails differently than yours does.

In a video game, different interpretations will give different people different, unfair, advantages. There is an "agreed upon" representation from the developer of the game that's supposed to be "fair" for everyone. Displaying audible cues as visual cues is a "cheat". Highlighting a piece of information or an object is a "cheat". Auto-interpreting information you're supposed to parse yourself is a "cheat". Not every cheat is a God-Mode, breaking-the-physics-of-the-game cheat. Plenty of cheats are just about having a slight edge over others.

For example, your game has a minimap. Part of the "gameplay" is that you scan the minimap every X seconds to check if something is approaching. It's a situational awareness skill that some will be better at than others. A map that flashes red when something approaches would give you an edge. Or the server is sending spatial audio information about where a sound is coming from. It's an auditory skill that you develop, and some will be better at it than others. An arrow on the screen pointing to where the audio is coming from would defeat that part of the game. Being able to see a moving shadow in the distance, knowing when to brake/turn in a racing game, an odds-calculator/card-counter in a card game, a parry/counter-attack indicator in a fighting game, etc. are all "skills" that you are expected to develop to become "good" at the game. They are what make these games fun/rewarding for people. Having tools to help you with these tasks would be considered "cheats" in these games.

At the end of the day, there are "cheats" that no software can catch. Having a friend sit next to you who just watches the minimap, calculates stuff for you, or watches a different part of the screen for you are all "cheats" that no software can catch.

Obviously there is some stuff to be done to limit the "exploit-ability" of the information. Don't have the server send information that the client doesn't need, like the location of all players all the time. Have the server reject invalid moves, like a player flying when there is no flying mechanic/ability for that player. But at the end of the day, the minimal amount of information needed to play the game can be exploited by someone if they find a different way to represent it to themselves that gives them an unfair advantage.
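To make the server-side rejection idea concrete, here's a minimal TypeScript sketch; the message shapes, the `canFly` flag, and the per-tick movement budget are all made up for illustration, not any real game's protocol:

```typescript
// Hypothetical server-side move validation: only accept state changes
// that the player's abilities actually allow. All names are illustrative.
interface MoveRequest {
  playerId: string;
  x: number;
  y: number;
  z: number; // altitude
}

interface PlayerState {
  x: number;
  y: number;
  z: number;
  canFly: boolean;
  maxSpeed: number; // maximum distance allowed per tick
}

function validateMove(state: PlayerState, move: MoveRequest): boolean {
  // Reject flight for players without a flying ability.
  if (move.z > 0 && !state.canFly) return false;

  // Reject teleport-like jumps that exceed the movement budget for one tick.
  const dx = move.x - state.x;
  const dy = move.y - state.y;
  const dz = move.z - state.z;
  const distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return distance <= state.maxSpeed;
}
```

This catches impossible moves, but note it does nothing about the re-representation cheats described above, since those never send an invalid move to the server.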


It's a monkey's head. "Mono" is Spanish for "monkey". The original author, Miguel de Icaza, is Mexican-American.


This is not really cut and dried. There are people who pay to be on a talk show or in a newspaper or to perform at an event. Then there are people who get paid to do the same. It all has to do with the economics of the situation.

In this case, those big newspapers were betting that they drive a large portion of Facebook engagement in Canada, so they wanted a cut. Facebook didn't think so. The dispute comes down to answering "How many people use Facebook because that's where they get their news?" vs. "How many people get the news because they happen to be on Facebook?"


I think what this more clearly shows is for how many people Facebook simply is the internet.


> Its major problems aren't technical but social, and no new technology will solve those.

Really? Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage? I view that problem as deeply technical. Unless by "social" you simply mean everyone should become an experienced sysadmin (or the slight variation: everyone should know an experienced sysadmin who's willing to run their applications for them for free).

Take something as mainstream as social media. Imagine a world where Facebook/Twitter/TikTok/YouTube/Reddit/HN/etc worked (seamlessly) like BitTorrent: an application on your machine that, when you run it, joins a "Facebook" network where your friends see you online through their instance of the application. Your feed/wall/etc is served to them directly from your machine. All your communication with them is handled directly between the 2 (or 1,000, or millions) of you. No centralized server needed. You could extend and apply this to the majority of centralized applications today. The only ones I can think of where this wouldn't work are inherently centralized services, like banking.

There are already plenty of p2p networks that show this is a viable approach: BitTorrent, Soulseek, Bitcoin, etc.

All the problems you will run into to make this as seamless as just connecting to facebook.com, however, are purely technical. The first big hurdle is seamless p2p connectivity: that is, without port forwarding, dynamic DNS, or requiring advanced networking, security, and other sysadmin knowledge from every user. Next come problems like: What happens when a node is offline? What happens to latency and load if you need to connect to thousands, hundreds of thousands, or millions of machines just to pull a "feed"? How is caching handled? How are updates/notifications pushed? How do nodes communicate when they are wildly out of date? Where is your data stored? How do you handle discoverability, security, etc.?

All deeply technical problems. Most are solvable, but you're gonna have to invest a significant amount of effort to solve them one by one to reach the same brain-dead simple experience as a centralized service. The fediverse has been trying to solve just a small subset of these problems for over a decade now, and the solutions still require a highly capable sysadmin to give users a similar (or only slightly worse) experience than twitter.com.


> Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage?

Not quite. The internet _is_ decentralized. What made the web so centralized from the start could partially be the result of lacking tools that made publishing as easy as consuming content. I.e., had we had a publishing equivalent of the web browser, perhaps the web landscape would've been different today. You can see that this was planned as phase 2 of the original WWW proposal[1] ("universal authorship"), but it never came to pass AFAIK.

So you could say that the problem is partly technical. But it's uncertain how much this would've changed how people use the web, and if companies would've still stepped in to fill the authorship void, as many did and still do today. Once the web started gaining global traction in the early 90s, that ship had sailed. People learned that they had to use GeoCities to publish content, and later MySpace, Facebook and Twitter. These services gained popularity because they were popular.

There have been many attempts over the years to decentralize the web, but now the problem is largely social. As you say, we've had the fediverse for over a decade now. How is that going? Are technical issues still a hurdle to achieve mass adoption, or are people not joining because of other reasons? I'd say it's the latter.

Most people simply don't care about decentralization. They don't care about privacy, or that some corporation is getting rich off their data. They do care about using a service with interesting content where most of their contacts are. So it's a social and traction issue, much more than a technical one. The only people who use decentralized services today are those who care more about the technology than following the herd. Until you can either get the average web user interested in the technology, or achieve sufficient traction so that people don't care about the technology, decentralized services will remain a niche.

There is another technical aspect to this, though. Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data. Many things will need to change on the operational side before your decentralized dream can become reality. I think this landscape would've also been different had the web started with decentralized principles, but alas, here we are.

[1]: https://info.cern.ch/hypertext/WWW/Proposal.html


> As you say, we've had the fediverse for over a decade now. How is that going?

Convenience trumps everything. All the parts of the iPhone existed for a few years before it -- especially PDAs with touch pens -- but what made the iPhone succeed was putting everything into a convenient, easier package.

The amount of time spent working on thing X has almost zero correlation with its adoption, as I think all of us techies know.

> Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data.

While that is true, let's not forget half-solutions like TeamViewer's relay servers, Tailscale / ZeroTier coordinators, and many others. They are not a 100% solution, but then again nothing is nowadays; we have to start somewhere. I agree that many ISPs would be very unhappy with a truly decentralized architecture, but the market will make them fall in line. I have no sympathy for some local businessmen who figured they could rake in tens of millions off a $50K investment. Nope, they'll have to invest more or be left out.

So there would be a market reshuffling and I'm very OK with it.

---

But how do we start off the entire process? I'd bet on automated negotiation between nodes + making sure those nodes are installed on many more machines. I envision a Linux kernel module that transparently keeps connections to a small but important subset of this future decentralized network, and the rest becomes just API calls that are almost as simple as the current ones (barring some more retry logic because, f.ex., "we couldn't find the peer in one full minute"). I believe many devs would be able to handle it.
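From the application developer's side, that "some more retry logic" could look roughly like the sketch below; `findPeerWithRetry`, the `lookup` callback, and the one-minute deadline are purely hypothetical names for a network that doesn't exist yet:

```typescript
// Hypothetical client-side wrapper: look up a peer on the decentralized
// network, retrying until a deadline instead of assuming instant DNS-style
// resolution. All names here are illustrative, not a real API.
async function findPeerWithRetry(
  peerId: string,
  lookup: (id: string) => Promise<string | null>, // returns an address or null
  timeoutMs = 60_000,
  retryDelayMs = 2_000,
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const address = await lookup(peerId);
    if (address !== null) return address;
    await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
  }
  throw new Error(`Could not find peer ${peerId} within ${timeoutMs / 1000}s`);
}
```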


The Holochain project has spent the last 5 years solving each of these problems…

You can now build internet-scale distributed systems, with or without centralized infrastructure (e.g. DNS, SSL certs, etc.).

In other words, massively distributed apps without any means for centralized authorities to stop them.


Ping uses ICMP, and Windows blocks inbound ICMP by default, so yes, `ping <windows-host>` doesn't work out of the box. Was the system your father was trying to ping a Windows system as well?

The other thing to check is whether he was running another VPN on his machine at the same time. Running multiple VPNs at the same time (on both Windows and Linux) requires extra fiddling to map the routing correctly and prevent their rules from overlapping/breaking each other. https://tailscale.com/kb/1105/other-vpns


No other VPN, and my Windows machine's firewall is on, yet it pings fine.

Anyway, Tailscale still has a way to go. Inviting someone to your tailnet doesn't seem to be the same as adding a machine yourself.


Oh yeah, forgot to mention: on a given tailnet, each machine that joins the network has an “owner”, shown under the machine name in the admin portal, and by default users can only reach their own machines, not everyone else's. As the network admin you can manage that through the ACLs tab.
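For illustration, a rule along these lines in the tailnet policy file would let an invited user reach one of your machines; the user, tag, and port below are placeholders, and the exact policy syntax is worth double-checking against Tailscale's ACL docs:

```json
{
  "acls": [
    // Hypothetical rule: let the invited user reach a tagged machine over SSH.
    {
      "action": "accept",
      "src":    ["brother@example.com"],
      "dst":    ["tag:shared-box:22"]
    }
  ]
}
```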


And this is why tailscale isn't solving the fundamental issues of connectivity. Thanks and cheers eddythompson80.


What is the alternative, here? Letting all machines on a tailnet talk sounds like a security issue. Maybe a better onboarding flow that prompts you to set ACLs when inviting a new user?


It seems you're assuming the firewall or my machine's configuration was the issue, rather than a Tailscale "sharing" feature issue.

I am, among other things, a network engineer, and previously I shared my tailnet with my brother's windows machine by logging him into my account directly, and it worked flawlessly.

I want TS to win, but they've got product and engineering work to do if they're serious.


There are also IT-Tools and CyberChef: self-hostable projects for running your own instance of "online utility tools".

https://gchq.github.io/CyberChef/

https://it-tools.tech/


Oh, these are great! I guess my differentiator is that I want to be on every medium, hence the VS Code extension and the desktop app for offline usage.


CyberChef supports offline usage, just save the page!

If you want a "quick" way to add a bunch more operators, all of ours are available via the 'cyberchef' NPM package[0] and the license permits embedding into other applications.

[0] https://github.com/gchq/CyberChef/wiki/Node-API
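From memory, the Node API exposes operations as functions on the package itself, roughly like the sketch below; treat the exact operation names and return value handling as assumptions and check the wiki above for the real API:

```typescript
// Rough sketch from memory of the CyberChef Node API; the operation name
// and return value handling are assumptions, see the wiki for the real API.
import chef = require("cyberchef");

const result = chef.toBase32("a string to encode");
console.log(result.toString());
```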


It's a kernel mode driver. There aren't layers in kernel drivers. Any kernel module/driver can crash your system if it wants to.


You're talking about how things are; the comment you're replying to is talking about how things could be. There's no contradiction there.

Originally, x86 processors had 4 levels of hardware protection, from ring 0 up to ring 3 (if I remember right). The idea was indeed that non-OS drivers could operate at the intermediate levels, but no one used them and they're effectively abandoned now. (There's "ring -1" now for hypervisors and maybe other stuff, but that's beside the point.)

Whether those x86 rings were really suitable or not is not exactly important. The point is, it's possible to imagine a world where device drivers could have less than 100% of the permissions.


I absolutely loathed a lot of Microsoft products for this simple thing. VSTS/VSO/DevOps/whatever name they have now, SharePoint, etc. were absolutely atrocious at this. Here is a deep link that’s 700 characters long, with a couple dozen base64 query strings and a nonsensical path. “What’s the problem? Can you use a URL-shortening service? URLs are long, nothing we can do about that.” Fuck me.

Back in 2014, my team had an internal tool to view the state of resources in our system. All resources and their states were stored in a SQL database. Yet the web app they developed was a SPA (before the invention of client-side routers and such), and it never updated its URL or supported deep linking. Whenever you wanted to send someone an email or an IM about an issue with a specific resource, you had to tell them “go to X tool, search for Y, click on Z -> W -> M -> O -> K, then you’ll see the issue there”. I found that so fucking infuriating. Why can’t I just share an https://X.com/Y/Z/W/M/O/K link for that deeply nested state? When I brought it up multiple times I was always told “it’s not a priority and it’s not that big of a deal”.

One time we were given 2 weeks to work on whatever we thought needed fixing. I decided to build an alternative that supported deep linking. But I also decided that all deep links should accept an `/api/` prefix that just returned the content in JSON format. It was such a hit with everyone in the team/company that usage of tool X almost died overnight, even though my tool was much more rudimentary and didn’t have all the features that tool had. It turns out most people just wanted an easy way to share links rather than a “really powerful SPA that lets you dig down and investigate things”.

A month later, the team that worked on tool X announced, in a huge email to the whole company, that they now supported deep links. Yet they thought the simple feature of returning JSON data on the `/api/` prefix was irrelevant. Five years later, my tool’s UI became obsolete, but the actual service was promoted to a “vital internal service” because so many other teams had built automation around the `/api/` URLs, and that team had to take over the code and maintain it.
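The pattern itself is tiny. Here's a minimal sketch of the deep-link + `/api/` prefix idea; Express is used purely for illustration, and the route shapes and `getResourceState()` are hypothetical stand-ins for that internal tool:

```typescript
// Minimal sketch of the deep-link + /api/ pattern described above.
// Express and getResourceState() are illustrative, not the original tool.
import express from "express";

const app = express();

async function getResourceState(path: string): Promise<object> {
  // Placeholder: look up the resource state for this path in the database.
  return { path, status: "healthy" };
}

// Same deep path with an /api/ prefix: machine-readable JSON. /api/Y/Z/W/M/O/K
app.get("/api/*", async (req, res) => {
  const state = await getResourceState(req.params[0]);
  res.json(state);
});

// Same deep path without the prefix: human-readable page. /Y/Z/W/M/O/K
app.get("/*", async (req, res) => {
  const state = await getResourceState(req.params[0]);
  res.send(`<pre>${JSON.stringify(state, null, 2)}</pre>`);
});

app.listen(3000);
```

The whole point is that the URL alone encodes the nested state, so both humans and automation can get to it with nothing but a link.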


I've also run into that kind of situation. I've learned that in an office environment, people are often content using a tool and following the established procedure, and don't consider that it could be better -- even if you ask them! Until you show them something better...

Good job :) Hope you at least got some recognition out of your efforts


One reason React Server Components make me uncomfortable (they do have their merits) is that they encourage commingling of API and presentation. And we all know that presentation layers always fail to design for some user/use case you just cannot yet foresee.


What is SPA?


It stands for Single Page Application. It's a web application that works by loading a big chunk of JavaScript and using that to render every "page" of the application, rather than providing links to different pages for different parts of the app.

Think Trello (SPA) compared to Hacker News.

These days well written SPAs can use the HTML5 history API to provide proper URLs to different parts of the application, so linking and bookmarks still work.

Historically this hasn't always been the case, and even today poorly written SPAs may fail to implement proper linkable URLs.
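For reference, the linkable-URL part usually boils down to something like this simplified TypeScript sketch; `renderView` is a placeholder for whatever the SPA actually uses to draw a view:

```typescript
// Simplified sketch of how a SPA keeps URLs linkable with the History API.
// renderView() is a placeholder for the app's own rendering logic.
function renderView(path: string): void {
  document.querySelector("#app")!.textContent = `Rendering view for ${path}`;
}

// Navigate inside the SPA: update the address bar without a full page load.
function navigate(path: string): void {
  history.pushState({}, "", path);
  renderView(path);
}

// Handle the back/forward buttons.
window.addEventListener("popstate", () => {
  renderView(location.pathname);
});

// Render the view for whatever deep link the user arrived on.
renderView(location.pathname);
```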


You're simply missing the point.

> I seriously don’t understand why people in 2024 insist in reproducing these core services but badly and at great expense.

Because those users have a flow that they like and don't want to change, regardless of all the pitfalls in that flow. "It works, there is just this one problem. Why don't you invent a solution for this one problem for us? I don't want to use S3, or Azure, or whatever. Just give me an SFTP endpoint I can upload files to." Oh, CSV doesn't have a standardized way of parsing escapes? Why don't you handle a `# escape=\` or `# doublequote` or `# singlequote` or `# ignore-new-lines-in-the-middle-of-a-string` directive, and we will use that.

Heck, it's 2024, and we have a fully automated application deployment service, integrated with source control (GitHub, GitLab, Bitbucket, etc.) and with S3/Azure Storage/GCP Buckets, aware of Dockerfiles and Makefiles, and we still have a non-trivial number of users asking "Why can't I just drag and drop my .php or .py files over FileZilla to deploy my code? I don't want to use git or Docker or S3." Oh, you can't version our code updates when we just drag 100 files over SFTP? Why don't we update a `version.txt` file when we're done uploading, and then you count that as a version?


> Because users have a flow that they like

https://www.bbc.com/news/articles/cx82407j1v3o

Some of these solutions were never good, but they built up a momentum of “we’ve always done it this way”.

