Friendly reminder that you can disable automatic downloading of remote images in the email clients on your phone and elsewhere. People use those images to track whether you've opened the email, measure engagement, etc.
1. Write a watcher that periodically reports whether you're actively using a coding-related app (Terminal, VS Code, etc.) to a server you control, and serve a "coding"/"not coding" badge on your profile page (it could even detect which project you're working on and display that; a rough sketch follows this list).
2. Write an Apple Watch app that periodically reports whether you’re asleep, and serve a “sleeping”/“awake”/“unknown” badge on your profile page.
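For idea 1, a minimal sketch in Python, assuming macOS (osascript for the frontmost app) and a made-up https://example.com/status endpoint on a server you control:

    import subprocess
    import time
    import urllib.request

    CODING_APPS = {"Terminal", "iTerm2", "Code"}  # adjust to taste

    def frontmost_app() -> str:
        # Ask System Events which process is frontmost right now.
        script = ('tell application "System Events" to get name of '
                  'first process whose frontmost is true')
        return subprocess.check_output(["osascript", "-e", script], text=True).strip()

    while True:
        status = "coding" if frontmost_app() in CODING_APPS else "not coding"
        req = urllib.request.Request("https://example.com/status",  # your server
                                     data=status.encode(), method="POST")
        urllib.request.urlopen(req)
        time.sleep(60)  # report once a minute

The badge endpoint then just serves an image based on the last reported status.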
> A cemetery in Slovenia just created a prototype digital tombstone that allows mourners to broadcast videos and images of the deceased in place of the traditional grey grave marker. The weatherproof 48-inch screens cost about $3,200 and have been built to resist vandals, Reuters reported.
Something like this was my first programming project when I was 12...
I had built a web service where you could get a unique URL for a .jpg and use it as your signature in forums. On the other end you had a Winamp plugin you had to install that reported the currently playing song. The PHP file that served the .jpg just used ImageMagick to build the image, so you could show everyone in the forum what you were listening to (or the last song you'd played). You could pick a theme for the image as well. No signup required.
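A rough sketch of what the serving side could look like today, using Flask and Pillow in place of PHP and ImageMagick (the now-playing file the plugin would write is hypothetical):

    import io
    from flask import Flask, send_file
    from PIL import Image, ImageDraw

    app = Flask(__name__)

    @app.route("/sig/<user>.jpg")
    def signature(user):
        # The Winamp-plugin side is assumed to write the current track here.
        try:
            song = open(f"{user}_now_playing.txt").read().strip()
        except FileNotFoundError:
            song = "nothing right now"
        # Re-render the banner on every request so it's always current.
        img = Image.new("RGB", (400, 60), "black")
        ImageDraw.Draw(img).text((10, 20), f"{user} is listening to: {song}", fill="white")
        buf = io.BytesIO()
        img.save(buf, "JPEG")
        buf.seek(0)
        return send_file(buf, mimetype="image/jpeg")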
Man this brings back memories... and also makes me feel bad that after all these years and a college degree I was probably smarter, faster and more productive as a 12 year old.
This was just PHP + a MySQL DB hosted on a free cPanel host (I think 110mb.com), with none of that isomorphic lambda react firebase vercel kubernetes jazz, and it still worked beautifully.
Nice! Mine was similar, for a browser game called Tribal Wars.
Same stack; I (ab)used the Referer header to determine who was requesting the image (from a village ID in the header). I'd embed it in tribe forum posts to mark people as read, display dynamic stats for points, tribes, villages and such. Sometimes I'd do shenanigans like "[current person] is a noob". It really threw people for a loop, as nobody had developed anything similar ;)
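The Referer trick is just this, sketched with Flask (the game's URL format here is invented, and the actual rendering is stubbed out; see the Pillow sketch upthread):

    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.route("/banner.png")
    def banner():
        # The forum page embedding the image leaks its own URL here.
        referer = request.headers.get("Referer", "")
        village_id = referer.split("village=")[-1] if "village=" in referer else "unknown"
        # A real version would render a per-viewer PNG from village_id.
        return Response(f"viewer village: {village_id}", mimetype="text/plain")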
> Write a watcher that periodically reports whether you’re actively using a coding-related app (Terminal, VS Code, etc.) to a server you control, and serve a “coding”/“not coding” badge on your profile page; (could even detect which project you’re working on, and display that;)
Write an app for your home automation system that reports whether you're around, and serve a "defended"/"safe to rob my house" badge on your profile page.
Honest question from someone about to graduate - why do you have a 10 day email backlog as a recent grad? Are you getting that many recruitment emails?
Not sure that sort of low level access is possible while the app is backgrounded. However, automatic sleep trackers have been available without official API since what, watchOS 2? So maybe.
Personally, I self-host Gitea. It works extremely well.
It has webhooks that tie nicely into my (also self-hosted) Drone (CI) and Mattermost (f/oss self-hosted Slack replacement) and CapRover (self-hosted Heroku) setups.
On a $300/mo server from Hetzner it can run git, CI, image registry, Mattermost, email, wiki, Discourse, Mumble, and host a bunch of apps (dev, stage, prod) without breaking a sweat. I set up the filesystem so it stores all server state for all apps and stuff in a single directory tree (thanks to Docker you can mount whatever source directories you want into a container at any path).
A continuous rsync job in a loop makes sure that directory tree is always synchronized onto another $20/mo VPS elsewhere. (Failures on the backup machine or in the script are reported in real time via webhooks back into Mattermost.)
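That loop doesn't need to be anything fancy; a sketch in Python (the webhook URL, paths, and host are placeholders):

    import json
    import subprocess
    import time
    import urllib.request

    WEBHOOK = "https://mattermost.example.com/hooks/xxxx"  # incoming webhook
    SRC, DEST = "/srv/state/", "backup-vps:/srv/state/"

    def notify(text):
        # Mattermost incoming webhooks take a JSON body with a "text" field.
        body = json.dumps({"text": text}).encode()
        urllib.request.urlopen(urllib.request.Request(
            WEBHOOK, data=body, headers={"Content-Type": "application/json"}))

    while True:
        result = subprocess.run(["rsync", "-az", "--delete", SRC, DEST])
        if result.returncode != 0:
            notify(f"backup rsync failed with exit code {result.returncode}")
        time.sleep(300)  # re-sync every 5 minutes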
It's a really robust system (for a single machine) and thanks to everything being containerized, can be easily replicated from the backup host back onto a new machine in under an hour or so in the event of crash/disaster.
For 90% of small businesses, even the $300/mo server is significant overprovisioning/overkill.
Were webrings implemented using image tags as well? My memory is faint; I remember having to copy/paste quite a large snippet of code, but I don't remember the actual implementation.
It's probably far simpler than I think it was.
There was a whole block of pre-formatted HTML you would have to insert into your website. When you clicked a button to navigate the ring, it would link to a CGI script on the owner's domain that took you to the next site.
Edit: Down this [1] page a little bit you can see an example of the code you would have to copy/paste
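The owner's side was essentially a tiny redirect service. A sketch of the idea in Python/Flask (the member list and query parameter are invented for illustration):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    SITES = ["https://alice.example", "https://bob.example", "https://carol.example"]

    @app.route("/ring/next")
    def ring_next():
        # Each member's "next" button passes its own URL so the script
        # knows where in the ring the visitor currently is.
        current = request.args.get("from", SITES[0])
        i = SITES.index(current) if current in SITES else 0
        return redirect(SITES[(i + 1) % len(SITES)])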
GitHub's premise has always been "social coding". They should indeed become more social, that may or may not remind you of Facebook but I personally have wanted more social features out of GitHub for a while.
It depends what "social features" you are referring to. I go to GitHub for code, not pictures of people's cats, memes, etc. If GitHub became more like a Facebook feed where people just shitpost their lives all day, it would become useless, imo.
I mean, they built and added camo like a decade ago to allow your reaction gifs in code review to work over HTTPS. The shitposting is now a fact of life.
Which ones for example? On the one hand, I sometimes feel the same, but OTOH I don't want this thing to get so bloated with "social" features instead of features to get things done :)
Personally I don't see much reason for this to be implemented via a git repository (rather than a text field in a database, like the existing Bio text), other than "we're GitHub, everything's a git repository, so why not".
At the moment, it's a single-file repository. Perhaps they have some ideas for other things that could be served from there too.
Weirdly at the moment it's required to be a public repository. This seems counterproductive: if it's my personal information blurb I don't much want other people looking at its history and I certainly don't want them forking it. (Are there any non-malicious reasons for doing that?)
To be honest, I'm not too concerned. Making it a git repo makes the history more readily available, but a bad actor could still get it from internet archive or other archive sites. And if the concern about forking is that someone could impersonate you, they could do so with a few extra steps by just looking at the rendered HTML of the README.
I know there's something to be said for making malicious behavior more difficult to achieve, but in this case the amount of extra work someone would have to do is minuscule if this weren't implemented as public repos.
> This seems counterproductive: if it's my personal information blurb I don't much want other people looking at its history and I certainly don't want them forking it.
Taken to its logical conclusion, this would mean you're opposed to the existence of web archives (which you probably aren't).
Which exact headers are being leaked by GitHub? Depending on what they leak, this could be useful only for counting hits (if all they leak is the implicit time of the request) or for fully tracking users (if they leak ETags, original IPs, etc.).
It's too bad the author removed the badge from their Github profile. It would be interesting to see the effect of this post on profile views.
Github proxies all image requests, like Gmail, as a security measure. The cache-busting technique the author refers to just ensures the security proxy doesn't cache the image data, since a cached copy would stop the counter from going up.
This isn’t meant to last. GitHub has been proxying markdown images for years to avoid exactly this. Probably they haven’t gotten around to fixing that here yet… but this surprises me.
The trick here is preventing the proxy from caching the image. The request is then still proxied but has a 1:1 correspondence to requests for the image. But now that it's public knowledge that this works, I agree that GitHub will close the hole.
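The cache-busting part amounts to serving the image with headers that tell the proxy not to keep a copy. A sketch, assuming Flask and a trivial stand-in SVG:

    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/count.svg")
    def count():
        svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="90" height="20">'
               '<text x="5" y="15">views: 42</text></svg>')
        resp = Response(svg, mimetype="image/svg+xml")
        # Without these, the proxy would cache the image and requests
        # would stop reaching this server.
        resp.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
        resp.headers["Expires"] = "0"
        return resp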
I've been using this exact trick to track hits on my github repos for years already. If/when Github gets around to fixing it, I've already had it working for many years so even if it wasn't "meant to last", it's already lasted quite a while and I have no regrets. If they stop allowing it, then so be it.
Another reason proxying requests to external images is important is that some browsers will display an HTTP Basic Auth dialog if the image request responds with the right kind of 401. It’s confusing and sort of disconcerting to see a basic auth dialog pop up from a random site if you’re just browsing Github.
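For the curious, the dialog is triggered by nothing more exotic than this kind of response from the image's server (sketched with Flask; route name made up):

    from flask import Flask, Response

    app = Flask(__name__)

    @app.route("/innocent.png")
    def innocent():
        # A 401 with a Basic challenge makes (some) browsers prompt for
        # credentials even though the "image" was embedded by another page.
        return Response(status=401, headers={"WWW-Authenticate": 'Basic realm="gotcha"'})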
I definitely agree, and believe that this sort of thing sets a very negative example for young folks coming into the industry. Chasing views, stars, and shares is a sure path to burnout and the mental health problems that already seem to plague very online tech people.
*[edit] I don't mean to shit on the OP's work or the coolness factor here. I'm just a bit put off by the idea of creating new anxiety levers for people to pull that don't offer much, if any, real value.
Bad practice to assume your ethos is the only one at play.
I'd hardly call taking a look at _where_ people are coming from to view your Github profile a "vanity" thing. A view of referrers could be extremely useful in tuning one's developer presence and visibility. That could come in handy when job-seeking, or trying to spread the word about your services and/or company. Both of which Github is widely used for.
Yes. I was sought and employed specifically because of the work visible on my Github profile from 2017 - 2019. My body of work on Github directly contributed to my candidacy for the job I currently hold.
Definitely possible. It could affect which repos I pin, the content of the profile blurb on the left, etc. If I had access to where folks were coming from, I could gauge how visible or discoverable my profile was from other sources, my blog, etc.
I agree, but I think such information should be available to all people.
Some people will be doing this anyway, and they can hide this by using a transparent pixel.
Seconding this.
I don't know if it's a fair comparison, but Instagram is removing publicly visible like counts because of their associated consequences, and embeddable banner services for tracking GitHub profile views seem inevitable.
It is still useful to know whether someone looked at your GitHub repos when you're looking for a job. Typically a visitor also hits other sites related to you, like your LinkedIn, or gets the link to your GitHub from a printed resume, etc. It lets you know someone is interested, and during a job search that can be useful information. It doesn't have to be only about "vanity"; maybe you're applying a rather narrow view to the rest of the world.
Why the need to make everything "social"? I doubt most of the population is insecure enough to be continually thinking about where to implement such things.
Social isn't necessarily bad. Trouble arises when social is combined with reinforcement learning algorithms that drive engagement. Jaron Lanier makes a good case, in "Ten Arguments for Deleting Your Social Media Accounts Right Now", for not using social media built that way:
> The results are tiny changes in the behavior of people over time. But small changes add up, like compound interest. This is one reason that BUMMER naturally promotes tribalism and is tearing society apart, even if the techies in a BUMMER company are well-meaning. In order for BUMMER code to self-optimize, it naturally and automatically seizes upon any latent tribalism and racism, for these are the neural hashtags waiting out there in everyone’s psyche, which can be accentuated for the purpose of attention monopoly.
BUMMER is an acronym:
> Seems like a good moment to coin an acronym so I don’t have to repeat, over and over, the same account of the pieces that make up the problem. How about “Behaviors of Users Modified, and Made into an Empire for Rent”? BUMMER. BUMMER is a machine, a statistical machine that lives in the computing clouds. To review, phenomena that are statistical and fuzzy are nevertheless real. Even at their best, BUMMER algorithms can only calculate the chances that a person will act in a particular way. But what might be only a chance for each person approaches being a certainty on the average for large numbers of people. The overall population can be affected with greater predictability than can any single person.
Similar dynamics show up in any forum where people compete with others for points, e.g. karma.
> According to my tests, profile views do not count as repo views, and I hope, in the future, we will be able to see traffic stats in the repository that hosts the README file.
> Of course, there will be some extra requests from bots, but having such statistics is better than nothing.
I think this is a bit understated. A simple number derived from the image requests is going to be very unreliable; on average, you won't be able to tell bots and real users apart.
Plotting the hits over time might be more useful.
Interesting, when I create a repo with my profile name, it shows this text:
> You found a secret! philshem/philshem is a special repository that you can use to add a README.md to your GitHub profile. Make sure it’s public and initialize it with a README to get started.
Everything old is new again. I used this same technique on MySpace 15 years ago. I wonder if Github will try to filter that out the same way MySpace did back in the day.
Memories!
I hid my top 6 and made a 'random friend' thing. It was an image (e.g. http://whatever/random-friend.jpg) that linked to http://whatever/forward-to-friend. The server served up a random profile picture and, crucially, a Set-Cookie header storing that friend's ID, which forward-to-friend would then use to send you to the corresponding profile.
I was _so_ proud of it at the time...
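For anyone wondering how that fits together, a sketch of the two endpoints (Flask; the friend list and URLs are made up, and the picture is served via redirect for brevity):

    import random
    from flask import Flask, redirect, request

    app = Flask(__name__)

    FRIENDS = {"123": "https://pics.example/123.jpg",
               "456": "https://pics.example/456.jpg"}

    @app.route("/random-friend.jpg")
    def random_friend():
        friend_id, pic = random.choice(list(FRIENDS.items()))
        # Send the browser to the picture, and remember who we showed.
        resp = redirect(pic)
        resp.set_cookie("friend_id", friend_id)
        return resp

    @app.route("/forward-to-friend")
    def forward_to_friend():
        # Clicking the image lands here; the cookie says whose profile to open.
        friend_id = request.cookies.get("friend_id", "123")
        return redirect(f"https://myspace.example/profile/{friend_id}")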
I did some research before writing the article. GitHub started proxying images in 2014, and there are a lot of repositories that use this technique to keep their stats. I think GitHub is OK with that.
>Unlike GitHub, most of them don't even bother proxying the image to hide IP, referrer, and browser agent. If you want to allow external images on your site, you must proxy them and hide everything about a person who requested it.
> A person with bad intentions can trick a victim into opening a profile that looks completely legit and detect their IP and browser.
Can you explain this in more detail? Given a profile host that doesn't proxy, how does that attack work?
1. Your browser loads the image straight from the external server; in that step the server gets your IP and potentially your user agent, since that's how browsers talk to servers.
2. If the attacker controls that server, merely getting you to view the profile hands them those details, e.g. via an image URL unique to you.
I can't remember much about images, but basically they were filtering out question marks and equals signs (i.e. ...thing.php?some=value) in a lot of places. I remember getting Facebook, and when they first started their API it was a bit better thought out: they loaded images on the user's behalf.
Some great reading about 'hacking' myspace at the time: https://samy.pl/myspace/
I did this for Facebook back in the early, early days, when you could still post your own images! I got a few blogs to write it up because it tracked the referring URL, so you could send out special links to people and then track how often they viewed your profile.
I miss being young and having a seemingly infinite amount of time on my hands...
This is interesting because I believe this didn't use to be possible from a repository's README: linked images would be cached and served from GitHub's CDN. I wonder what's changed (if it indeed has).
Thanks for sharing. Though it's exciting to see so many ideas here, wouldn't it be better if GitHub solved this tracking use case in the platform rather than all of us hacking up different solutions?
> The rest of the story is simple — you need to store a counter in the database and return an SVG image with the number of views as a response. Of course, there will be some extra requests from bots, but having such statistics is better than nothing.
"The rest of the story is simple" uh, no that's not simple at all! You want me to register a domain, get a hosting provider, create a database, and create logic to listen for requests with anti-bot detection? Please don't say this is simple.
> As part of recent design changes, GitHub has introduced READMEs for profiles
This is a weird new myth that might just highlight the difficulty of discoverability. GitHub has had user profile READMEs for years. The difference is just where the README shows up and what the repo is supposed to be named.
You could already create a "<username>.github.io" repo that governs what shows up when visiting https://<username>.github.io