
I would love to see a reverse-engineering effort on the original MCU firmware. So far everything I've found seems to cover only the IR protocol. The MCU appears to be unlabeled (https://github.com/sueppchen/PixMob_waveband/wiki/MCU).

Now I can finally parse (X)HTML with Excel ;)

Now I need to build a regex for zalgoifying text in Excel.

An article from 2013 about Adobe Photoshop version 1.x from 1990...


I'm pretty sure half of that code is still running in WASM on photoshop.adobe.com


You mean current Photoshop includes Pascal code?


Tools used for art often get irrationally preserved for the sake of it. For example, I have had conversations with more than one person (well, two, but still) who unironically believed that the wiring inside vintage guitars and amps must be coated with asbestos insulation or it would change the tone/texture of the sound.


Don’t crush that in a hydraulic press.


What's wrong with Pascal, apart from the difficulty of hiring developers for it?


I hated the dialects of Pascal we were using at school in the early 1980s because they didn't really support systems programming. After I got a 286 machine I got into Turbo Pascal, which did have the extensions I needed, and I greatly preferred it to C. But I switched to C in college because I could write C programs and run them on my PC or on Sun workstations with a 32-bit address space.


Turbo Pascal and later Delphi were really nice, but I guess in the same vertical C won due to its UNIX legacy.

You can pretty much transform 1:1 between C and Pascal code.


Writing in Pascal itself is a Job Preservation Pattern


Nothing wrong, just surprised


I would not be surprised if it does. Photoshop is big and has a lot of legacy.


I have a feeling that much of it was translated to C or C++ at some point for portability and maintainability reasons. There are several automated Pascal to C translators out there, such as the following...

http://users.fred.net/tds/lab/p2c/

Also the languages are similar enough that a programmer with knowledge of both could translate it manually without too much difficulty.


Typically TeX is translated from Pascal to C too, via web2c.

But there is also a Pascal-to-WASM compiler out there, written specifically for TeX:

https://github.com/kisonecat/web2js

TeX itself is only about 500kb of wasm, uncompressed, but the memory images with LaTeX loaded are quite a bit larger.


It was transpiled to C and then C++ many years ago.


I've also tried to optimize C++ compile times on large projects a few times. I never got IWYU working properly, and I always hated the fact that I still had to care about header files at all. Then I switched to doing Rust full time, which made all the fiddling with header files obsolete. This felt amazing. But now I'm facing the same problem, slow compile times :). Only this time I have to rely on the compiler team to make the improvements, and I can't do much on my side AFAIK.


Well, that's not quite true. You can do a few things:

1. Reduce dependencies and features of dependencies

2. Use a faster linker like mold

3. Use a faster compiler backend like Cranelift (if possible)

4. Use the parallel compiler frontend (again, if using nightly is possible)

5. Use sccache to cache dependencies

But I do get what you mean. Especially in CI the build times are often long.


Split up crates so your compilation units are smaller.


I had to reload the page on Android FF, then I got past "Loading"


I feel like release tarballs shouldn't differ from the repo sources. And if they do, there should be a pipeline which generates and uploads the release artifacts...

Can somebody write a script which diffs the release tarballs against the git sources for all Debian packages and detects whether there are any differences apart from the files added by autotools :)?
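
Something like the sketch below could be a starting point for a single package (TypeScript for Node; it assumes git, tar and GNU diff are on the PATH, and the list of autotools-generated files to ignore is illustrative, certainly not complete):

    // Compare one upstream release tarball against the matching git tag and
    // print anything that differs beyond typical autotools output.
    import { execFileSync } from "node:child_process";
    import { mkdirSync, mkdtempSync, readdirSync } from "node:fs";
    import { tmpdir } from "node:os";
    import { join } from "node:path";

    const [tarball, repoUrl, tag] = process.argv.slice(2);
    if (!tarball || !repoUrl || !tag) {
      console.error("usage: tarball-diff <release.tar.gz> <git-url> <tag>");
      process.exit(2);
    }

    const work = mkdtempSync(join(tmpdir(), "tarball-diff-"));
    const tarDir = join(work, "tarball");
    const gitDir = join(work, "git");

    // Unpack the release tarball and do a shallow checkout of the tag.
    mkdirSync(tarDir);
    execFileSync("tar", ["-xf", tarball, "-C", tarDir]);
    execFileSync("git", ["clone", "--quiet", "--depth", "1", "--branch", tag, repoUrl, gitDir]);

    // Tarballs usually contain a single top-level directory (e.g. foo-1.2.3/).
    const tarRoot = join(tarDir, readdirSync(tarDir)[0]);

    // Files that autoreconf / `make dist` legitimately add; extend as needed.
    const ignore = [
      ".git", "configure", "config.guess", "config.sub", "config.h.in",
      "aclocal.m4", "Makefile.in", "ltmain.sh", "install-sh", "depcomp", "missing",
    ];

    try {
      execFileSync(
        "diff",
        ["-r", "-u", "-N", ...ignore.map((p) => `--exclude=${p}`), tarRoot, gitDir],
        { stdio: "inherit" },
      );
      console.log("no unexpected differences");
    } catch {
      process.exitCode = 1; // diff exits non-zero when the trees differ
    }

Doing this for every Debian package would mostly be a matter of looping over each package's upstream tarball, repository URL and tag, which is the harder data-collection part.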


Did I read this correctly: audio fingerprinting is mainly about identifying the browser version and OS or laptop being used, but it can't identify end users in a stable way?


Yeah, it doesn't tell a website who you are. Instead, it allows them to recognize you when you come back to visit, even if you clear cookies.

This is particularly a problem with big advertiser networks because they can track you across many sites you visit, even if you disable third-party cookies.

It has positive uses too, like preventing click fraud and concert ticket arbitrage.


> Instead, it allows them to recognize you when you come back to visit, even if you clear cookies.

I don't think that's what stockhorn said. stockhorn said it can only identify what browser and OS and laptop model you're using. Someone else with the same browser, OS, and laptop model would have the same fingerprint. So audio fingerprinting couldn't precisely recognize you when you come back.


> Someone else with the same browser, OS, and laptop model would have the same fingerprint.

The collision rate of their IDs is stated to be 0.05%.

What they do is basically collect a lot of signals from the browser (audio processing stuff being only a part of it) and then compute an ID on the server.
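
For the audio part specifically, the widely described trick is to render a fixed signal through an OfflineAudioContext and reduce the output to a single number that depends on the browser's audio stack. A rough sketch in TypeScript (the node parameters and sample range are illustrative, not any particular library's exact values):

    // Render a short signal offline (no sound is played) and boil it down to a number.
    async function audioFingerprint(): Promise<number> {
      const ctx = new OfflineAudioContext(1, 44100, 44100);

      const oscillator = ctx.createOscillator();
      oscillator.type = "triangle";
      oscillator.frequency.value = 10000;

      // A compressor amplifies tiny differences in floating-point behaviour
      // between browser builds, OSes and sometimes hardware.
      const compressor = ctx.createDynamicsCompressor();
      compressor.threshold.value = -50;
      compressor.knee.value = 40;
      compressor.ratio.value = 12;
      compressor.attack.value = 0;
      compressor.release.value = 0.25;

      oscillator.connect(compressor);
      compressor.connect(ctx.destination);
      oscillator.start(0);

      const buffer = await ctx.startRendering();
      const samples = buffer.getChannelData(0);

      // Sum a slice of the rendered samples: stable per browser/OS build,
      // but identical for everyone running the same stack.
      let sum = 0;
      for (let i = 4500; i < 5000; i++) sum += Math.abs(samples[i]);
      return sum;
    }

    audioFingerprint().then((v) => console.log("audio signal:", v));

On its own that number only narrows things down to a browser/OS/hardware combination; it becomes identifying once the server combines it with all the other collected signals.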


Browser, OS, laptop joined with IP looks like a pretty good ID


IP is a pretty good ID...


NAT really.


I see what you did there…


Not if you’re behind something NAT’d, which is especially true on mobile.


Still, parent does state a pretty big concern when looking at this from a higher vantage point.

These practices and their repercussions aren't self-contained.


My phone running Firefox for Android produced the same results as the sample data for Firefox on Windows, which does seem to fit with this largely being a browser-identification scheme.


I think that is correct, but it still seems like an amount of leakage that could be further correlated with another trick.

There was previously a site which could indicate how globally unique your environment was (some combination of screen size, user-agent, fonts?, etc). Locking down to a specific hardware+browser configuration probably does a lot to remove anonymity.



Not the one I used, but this one actually looks better.

Just being Linux + Firefox is terrible for blending into the herd. Let alone everything else that leaks (having a desktop + GPU + good monitor basically destroys all remaining hope).


Probably was EFF's Panopticlick, which has evolved into https://coveryourtracks.eff.org

The about page has some history: https://coveryourtracks.eff.org/about


I gave the web app and Android app a quick test. It looks nice, but it is still not as smooth as Google Photos or similar. E.g., the photos are not fully preloaded and I still need to wait a few milliseconds for a photo to be fully rendered. (Also, there is some white flickering when swiping through the photos on Android.)

I wonder how the big players do it. Of course they have a lot more manpower, but maybe they also have some clever caching/rendering lib...?

Kudos for doing this and open-sourcing everything. I really appreciate this and I might stick around.


Thanks for checking us out!

We currently keep two versions of a photo: the original, and a downscaled copy to be rendered as the thumbnail.

Unlike non-e2ee providers, we cannot transcode and serve optimised images on the fly when it would be faster to downscale than to serve the original image over the network.

What we could do is

1. Intelligently preload original photos when their thumbnails are in scope

2. Store an extra version of the photo, whose resolution is between that of the thumbnail and the original, and perform #1 over those

Sorry about the flicker, will fix it.
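
For #1, a minimal sketch of what preloading on the web side could look like, using an IntersectionObserver. `fetchOriginal` and the `data-file-id` attribute are hypothetical stand-ins, not Ente's actual API:

    // Once a thumbnail scrolls near the viewport, start fetching (and
    // decrypting) the full-resolution file so opening it later is instant.
    function preloadOriginalsInView(
      fetchOriginal: (fileId: string) => Promise<Blob>,
    ): IntersectionObserver {
      const inFlight = new Map<string, Promise<Blob>>();

      const observer = new IntersectionObserver(
        (entries) => {
          for (const entry of entries) {
            const id = (entry.target as HTMLElement).dataset.fileId;
            if (!id || !entry.isIntersecting || inFlight.has(id)) continue;
            // Cache the promise so repeated scrolling doesn't refetch.
            inFlight.set(id, fetchOriginal(id));
          }
        },
        { rootMargin: "300px" }, // start a little before the thumbnail is visible
      );

      document
        .querySelectorAll<HTMLElement>("img[data-file-id]")
        .forEach((el) => observer.observe(el));

      return observer;
    }

Caching the promise rather than the decoded blob keeps overlapping scroll events from kicking off duplicate downloads.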


Ah, this totally makes sense! Do you already use progressive JPEGs when opening the full image?

If only there were a progressive encoding which let you get a perfectly downscaled (not blurry) version of an image just by reading part of the file.


TIL about progressive JPEGs, will check it out, thanks!

We currently render the thumbnail first (in most cases it's available locally) and then replace it with the original image once fetched from remote, while replicating the zoom and pan operations (if any) performed by the user. So it is "progressive" in some sense, but goes from something like 20% to 100% in one shot.

("20%" is a simplification, the actual value will depend on the resolution of the original image and that of the generated thumbnail, the latter is fixed)


I once submitted a bug report to Spotify regarding a problem with the Chromecast integration. The support agent asked me the following questions in turn, all of which I followed:

- Can you reinstall the app and try again?

- Can you describe step by step how to reproduce the bug?

- Can you make a screencast showing the problem?

- Can you factory-reset your phone and check if the issue persists?

I never heard back after doing the last step. They probably didn't think I would report back again. (Thank god I had an old phone lying around...)

Needless to say, the problem is still there today...


> Can you factory-reset your phone and check if the issue persists?

This is a big red flag for organizational dysfunction. When the QA team puts up a big roadblock to even filing a bug and wants to play 20 questions for every bug report, it's a way to just avoid taking bug reports.

But why avoid the bug reports? Someone in QA is being measured on them, or a second layer (the engineers) is being pressured by a third layer (management) to ship new features at high quality and not "waste time" fixing bugs. Maybe the bug reports won't be fixed anyway.


Wow... this board game of yours is crazy. I'll definitely take a deeper look at it :). Thanks!


thanks!


