
Obligatory pointer to various Amazon price trackers like https://keepa.com/



Is it accurate these days? I recall there was some point where Amazon locked them out of deal prices etc, and I guess it also doesn't account for vouchers that apply at checkout?

I was looking at one of the GMKTek Ryzen AI Max boxes and it's overpriced by ~£1000 with a ~£1000 voucher to apply at checkout; or is this part of some other "scheme"?


I use these for various European Amazon sites (often cheaper to buy from the Amazon next door - and shipping is still free) and it's astonishing how bad they are. It's Amazon's new system of vouchers (in € or %), temporary offers, etc. that these sites can't keep up with. I saw some products before Prime Day with a 20% voucher that were more expensive on Prime Day (10% reduced but no voucher any more), yet the price trackers showed them as cheapest ever.

Honestly, at this point I'd rather compare with bol, idealo, guenstiger and tweakers, and am then usually better off not buying from Amazon.


I wonder what the scheme is here though; Why overprice something by ~50% and add a voucher for the same amount off; is it some sort of anti-deal tracking thing?

It always puts me off from buying something expensive because I wonder if somehow I could end up worse off (in terms of returns, or warranty or something) because I bought something that was X but only paid Y due to the voucher.

Realistically, I probably wouldn't buy a high-end computing product from Amazon anyway, unless it was notably cheaper than the specialists I'd normally buy from. Something like a £2000+ mini PC isn't the sort of thing the typical UK PC retailers I buy from would stock.


I think some of the "coupons" are only usable once, so could be a way to price more fairly when there's limited inventory or something?

Like you can buy one at regular price, but 2nd and beyond get marked up 100%?


Ah that would make sense. I suppose if there's limited stock of them in the Amazon warehouse, it prevents people buying up lots of them and not leaving any for other people.


> Why overprice something by ~50% and add a voucher for the same amount off

Preying on human psychology and the non-rational consumer. People are emotional impulsive creatures, and the marketing department knows this. If you think you're winning in some secret way only available to you (you're so special!), for a limited time only (so don't miss out!), it taps into something primal that turns off the rational part of our brains, and suddenly we're buying something we don't need at a price higher than we'd ideally pay.


It's for the dopamine hit of "saving money" or feeling like one has, even if you haven't


Geizhals factors in vouchers.


I think they were locked out during Covid. Something something stock level.

A lot of dodgy shit happened during Covid. Like Google and Apple rolling out the largest tracking network in human history. Your phone sends a Bluetooth beacon every 30 seconds and any phone in the vicinity will pick it up and vice versa. Because of Covid. Track and trace. Guess what, it's still happening.

They said only government health institutes would get access to it. Right. Right?


Why not those that track many different stores? For many things Amazon never has the best price.


Can you name anything? One of the biggest complaints I've read about Amazon from the seller side is that they can be punished if they offer a better price somewhere other than Amazon.

The only items that seem to skirt this is repackaged items/junk from AliExpress, but they are or are pretending to be different companies.


I doubt that's legal here in Germany. Well known sites are https://www.idealo.de/ and https://geizhals.de/. But https://www.google.com/shopping should work anywhere no?


Noticed that after ten mins, contacted the author immediately, and he seems to be working on it / restoring his account / removing malware from published packages.

Kinda "proud" of it haha :D


Doesn’t npmjs do things like signing, pinning, and yanking packages, like rubygems?


Yes


One of the most insidious parts of this malware's payload, which isn't getting enough attention, is how it chooses the replacement wallet address. It doesn't just pick one at random from its list.

It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.

This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
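For illustration, the selection step can be sketched like this (my own reconstruction with made-up names, not the deobfuscated payload itself):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def pick_lookalike(victim_address: str, attacker_addresses: list[str]) -> str:
    # Pick the attacker-controlled address with the smallest edit distance
    # to the legitimate one, i.e. the most visually similar replacement.
    return min(attacker_addresses, key=lambda a: levenshtein(victim_address, a))
```

The effect: if you only glance at the first and last few characters, the swapped address is the one most likely to pass that check.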

We did a full deobfuscation of the payload and analyzed this specific function. Wrote up the details here for anyone interested: https://jdstaerk.substack.com/p/we-just-found-malicious-code...

Stay safe!


I'm a little confused on one of the excerpts from your article.

> Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3

As far as I've always understood, the lockfile always specifies one single, locked version for each dependency, and even provides the URL to the tarball of that version. You can define "x version or newer" in the package.json file, but if it updates to a new patch version it's updating the lockfile with it. The npm docs suggest this is the case as well: https://arc.net/l/quote/cdigautx

And with that, packages usually shouldn't be getting updated in your CI pipeline.

Am I mistaken on how npm(/yarn/pnpm) lockfiles work?


Not the parent, but the default `npm install` / `yarn install` builds will ignore the lock file unless everything can be satisfied, if you want the lock file to be respected you must use `npm ci` / `yarn install --frozen-lockfile`.

In my experience, it's common for CI pipelines to be misconfigured in this way, and for Node developers to misunderstand what the lock file is for.
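Concretely, the strict variants look like this (flag names as documented for each tool; pnpm's default behavior also depends on whether it detects a CI environment):

```shell
# npm: 'install' may rewrite package-lock.json; 'ci' fails if the
# lockfile and package.json disagree, and never touches the lockfile.
npm ci

# yarn v1 equivalent:
yarn install --frozen-lockfile

# pnpm equivalent:
pnpm install --frozen-lockfile
```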


Not a web guy, but that seems a bonkers default. I would have naively assumed a lockfile would be used unless explicitly ignored.


Welcome to the web side. Everything’s bonkers. Hard-earned software engineering truths get tossed out, because hey, wtf, I’ll just do some stuff and yippee. Feels like everyone’s stuck at year three of software engineering, and every three years the people get swapped out.


> every three years the people get swapped out

That's because they are being "replaced", in a sense!

When an industry doubles every 5 years like web dev was for a long time, that by the mathematical definition means that the average developer has 5 years or less experience. Sure, the old guard eventually get to 10 or 15 years of experience, but they're simply outnumbered by an exponentially growing influx of total neophytes.

Hence the childish attitude and behaviour with everything to do with JavaScript.


Good point! The web is going through its own endless September.

And so, it seems, is everything else. Perhaps, this commentary adds no value — just old man yells at cloud stuff.


The web saw "worse is better" and said "hold my beer"


We didn't get locking until npm v5 (some memory and googling, could be wrong.) And it took a long time to do everything you'd think you want.

Changing the main command `npm install` after 7 years isn't really "stable". Anyway didn't this replace versions, so locking won't have helped either?


You can’t replace existing versions on npm. (But probably more important is what @jffry mentioned – yes, lockfiles include hashes.)
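For reference, a lockfile entry looks roughly like this (version string and hash shortened/illustrative) — the `integrity` field is a subresource-integrity hash of the published tarball, so a silently swapped tarball would fail verification:

```json
{
  "node_modules/chalk": {
    "version": "5.3.0",
    "resolved": "https://registry.npmjs.org/chalk/-/chalk-5.3.0.tgz",
    "integrity": "sha512-<base64 SHA-512 of the tarball>"
  }
}
```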


> Anyway didn't this replace versions, so locking won't have helped either?

The lockfile includes a hash of the tarball, doesn't it?


It does, the answer to my question was no.


TIL: I need to fix my CI pipeline. Gonna create a jira ticket I guess…

Thank you!


Sorry, I had assumed this was what you were doing when I wrote my question but I should have specified. And sorry for now making your npm install step twice as long! ;)


npm ci should be much faster in CI as it can install the exact dependency versions directly from the lockfile rather than having to go through the whole dependency resolution algorithm. In CI environments you don't have to wait to delete a potentially large pre-existing node_modules directory since you should be starting fresh each time anyway.


I've seen pipelines that cache node modules between runs to save time, but yeah if they're not doing that then you're totally right.


Yeah, I think I had made the assumption that they were using `npm ci` / `yarn install --frozen-lockfile` / `pnpm install --frozen-lockfile` in CI because that's technically what you're always supposed to do in CI, but I shouldn't have made that assumption.


As others have noted, npm install can/will change your lockfile as it installs, and one caveat for the clean-install command they provide is that it is SLOW, since it deletes the entire node_modules directory. Lots of people have complained but they have done nothing: https://github.com/npm/cli/issues/564

The npm team eventually seemed to settle on requiring someone to bring an RFC for this improvement, and the RFC someone did create has, I think, sat neglected in a corner ever since.


Is there no flag to opt out of this behavior? For Rust, Cargo commands will also do this by default, but they also have `--offline` for not checking online for new versions, `--locked` to require sticking with the exact version of the lockfile even when allowing downloading dependencies online (e.g. if you're building on a machine that's never downloaded dependencies before, so they aren't cached locally, but you still don't want to allow implicit updates), and `--frozen` (which is a shorthand for both `--locked` and `--offline`). I'm honestly on the fence about whether this is even sufficient, since I've worked at multiple places where the CI didn't actually run with `--locked` because whoever configured it didn't realize, and at least once a surprise update to the lockfile in CI ended up causing an issue that took a bit of time to debug before someone realized what was going on.
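For reference, the Cargo flags described above (per Cargo's documented behavior):

```shell
# Fail the build if Cargo.lock would need to change (what CI should use):
cargo build --locked

# Resolve dependencies without touching the network at all:
cargo build --offline

# Both at once:
cargo build --frozen   # equivalent to --locked --offline
```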


You’re right and the excerpt you quoted was poorly worded and confusing. A lockfile is designed to do exactly what you said.

The package.json locked the file to ^1.3.2. If a newer version exists online that still satisfies the range in package.json (like 1.3.3 for ^1.3.2), npm install will often fetch that newer version and update your package-lock.json file automatically.

That’s how I understand it / that’s my current knowledge. Maybe there is someone here who can confirm/deny that. That would be great!
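A rough sketch of what a caret range permits (simplified; npm's real semver resolver also handles prereleases, build metadata, and partial versions):

```python
def satisfies_caret(version: str, base: str) -> bool:
    """Does `version` satisfy `^base`?  Caret ranges allow updates that do
    not change the left-most non-zero component of the base version."""
    v = tuple(int(x) for x in version.split("."))
    b = tuple(int(x) for x in base.split("."))
    if v < b:
        return False  # must be at least the base version
    # The left-most non-zero component of the base must match exactly.
    for i, part in enumerate(b):
        if part != 0:
            return v[:i + 1] == b[:i + 1]
    return v == b  # ^0.0.0 pins exactly
```

So `^1.3.2` matches 1.3.3 (what happened here) but not 2.0.0, while `^0.2.3` only matches 0.2.x.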


You're correct


We should be displaying hashes in a color scheme determined by the hash (foreground/background colors for each character determined by a hash of the hash, salted by that character's index, adjusted to ensure sufficient contrast).

That way it's much harder to make one hash look like another.
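A minimal sketch of the idea using ANSI 256-color escapes (the "sufficient contrast" guard here is deliberately crude; a real version would use perceptual color math):

```python
import hashlib

def colorize_hash(hexdigest: str) -> str:
    """Render each character of a hash with fg/bg colors derived from a
    hash of the hash, salted by the character's index."""
    out = []
    for i, ch in enumerate(hexdigest):
        salt = hashlib.sha256(f"{hexdigest}:{i}".encode()).digest()
        fg = 16 + salt[0] % 216          # index into the 6x6x6 color cube
        bg = 16 + salt[1] % 216
        if abs(fg - bg) < 36:            # crude "too similar" guard
            bg = 16 + (bg - 16 + 108) % 216
        out.append(f"\x1b[38;5;{fg}m\x1b[48;5;{bg}m{ch}")
    out.append("\x1b[0m")
    return "".join(out)
```

Because every character's colors depend on the whole digest, changing even one character of the hash recolors the entire string.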


As someone with red/green vision deficiency: if you do this, please don’t forget people like me are unable to distinguish many shades of colours, which would be very disadvantageous here!


It’s not like it would hurt you for there to be supplementary info others can see but you can’t.


I think 9dev was saying that providing only a colorized version might make it unreadable to some people, not merely that they wouldn't benefit from the extra color information.


And it's not like it would hurt the developers to be conscious of their choices.


There's actually nothing the developers can do about this particular issue other than to display all colors and allow colorblind people to see the colors that they can see.


It doesn't matter which colors the algorithm chooses so long as background/foreground are very distinguishable to as wide an audience as possible, and prev/next are likely to be distinguishable more often than not.

That's a lot of flexibility within which to do clever color math which accounts for the types of colorblindness according to their prevalence.
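For instance, the WCAG contrast-ratio formula gives a concrete "distinguishable enough" test the color picker could enforce (just the metric; the pairing logic is left out):

```python
def _linear(c: int) -> float:
    # sRGB channel linearization per the WCAG definition of relative luminance.
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(a: tuple[int, int, int], b: tuple[int, int, int]) -> float:
    # Ratio of lighter to darker luminance; ranges from 1:1 to 21:1.
    hi, lo = sorted((luminance(a), luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)
```

WCAG suggests at least 4.5:1 for normal text, which would be a reasonable floor for each fg/bg pair.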


For the newly made up feature, which doesn't exist yet, but already has an issue?

Simple. Instead of forcing colour, one could retain a no colour option maybe?

Done. Solved.

Everything should have this option. I personally have no colour vision issues, other than I find colour annoying in any output. There's a lot who prefer this too.


Agreed, although I would argue that maximal hash contrast should be default, and if people find they prefer less, they can turn it down.

If you're the sort of person who would think about adjusting it to suit your sensitivity to this kind of attack, you're likely not the sort of person that the feature is trying to protect anyhow.


Team https://no-color.org/ for life

One will not be surprised to see that Chalk chooses its own path via the stunningly opaque FORCE_COLOR=0 and is all :fu: to people who suggest otherwise <https://github.com/chalk/chalk/issues/547#issuecomment-11268...> One will especially enjoy the "get bent" response because I discovered that one issue by, you know, searching the issues <https://github.com/chalk/chalk/issues?q=is%3Aissue%20NO_COLO...>


You could still ignore the colors and just read the characters, like people do now, and you could still use whatever color cues you are sensitive to.


Not sure why you're being downvoted; OpenSSH implemented randomart, which gives you a little ascii "picture" of your key to make it easier for humans to validate. I have no idea if your scheme for producing keyart would work but it sounds like it would make a color "barcode".
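For the curious, OpenSSH's randomart is the "drunken bishop" walk; a simplified sketch (the real one also marks the start and end cells and draws a frame):

```python
def randomart(fingerprint: bytes, width: int = 17, height: int = 9) -> list[str]:
    """Simplified 'drunken bishop': each fingerprint byte yields four 2-bit
    diagonal moves (least-significant bits first), clamped to the board;
    cell visit counts are mapped to symbols."""
    board = [[0] * width for _ in range(height)]
    x, y = width // 2, height // 2          # start in the center
    for byte in fingerprint:
        for _ in range(4):
            dx = 1 if byte & 0x1 else -1
            dy = 1 if byte & 0x2 else -1
            x = min(max(x + dx, 0), width - 1)
            y = min(max(y + dy, 0), height - 1)
            board[y][x] += 1
            byte >>= 2
    symbols = " .o+=*BOX@%&#/^"
    return ["".join(symbols[min(c, len(symbols) - 1)] for c in row) for row in board]
```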


I have to say the openssh random art has never really helped for me - I see each individual example so infrequently and there's so little detail to remember that it may as well just be a hash for all the memorability it doesn't add


If you ignored the characters and just focused on the background colors, yeah I suppose it would look like a barcode. But the way I envision it, each line on the barcode is a character, so it still copy/pastes into notepad as the original text, but it'll copy/paste into word as colored text with colored background.


Can you attribute this technique to a specific group?


A few years ago, I remember reading about some NFT contract attack that did something similar. So I'm sure it's out there now.


It's not a "group specific" technique.

This is smart, but not really unusual.


Almost certainly Lazarus


The phishing email comes across a bit too amateur. Specifically the inclusion of:

"we kindly ask that you complete this update your earliest convenience".

The email was included here: https://cdn.prod.website-files.com/642adcaf364024654c71df23/...

From this article: https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...


Very amateur. Who would fall for that, really? I can only suspect npm people who are used to unprofessional repo hosting practices.

Such a Two Factor Authentication update request would have needed a blog post first, to announce such a fishy request.


That moment where you respect the hacker. Still we are encroaching on dark times.


> This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit ...

I don't agree that the exuberance over the brilliance of this attack is warranted if you give this a moment's thought. The web has been fighting lookalike attacks for decades. This is just a more dynamic version of the same.

To be honest, this whole post has the ring of AI writing, not careful analysis.


> To be honest, this whole post has the ring of AI writing, not careful analysis.

No it doesn't?


> To be honest, this whole post has the ring of AI writing, not careful analysis.

It has been what, hours? since the discovery? Are you expecting them to spend time analysing it instead of announcing it?

Also, nearly everyone has AI editing content these days. It doesn’t mean it wasn’t written by a human.


Just for a counter, "nearly everyone" seems wildly ambitious.

I want no part of AI in any form of my communication, and I know many which espouse the same.

I will certainly agree on "many", but not "nearly everyone".


I've been thinking about using Levenshtein to make hexadecimal strings look more similar. Levenshtein might be useful for correcting typos, but not so when comparing hashes (specifically the start or end sections of it). Kinda odd.


It looks like a lot of the author's packages have been compromised (in total over 1 billion downloads). I've updated the title and added information to the blog post.


Update: It seems like all packages of the author got hacked.


The discrepancy comes from how npm packages are published. What you see on GitHub is whatever the maintainer pushed to the repo, but what actually gets published to the npm registry doesn’t have to match the GitHub source. A maintainer (or someone with access) can publish a tarball that includes additional or modified files, even if those changes never appear in the GitHub repo. That’s why the obfuscated code shows up when inspecting the package on npmjs.com.

As for the “0 downloads” count: npm’s stats are not real-time. There’s usually a delay before download numbers update, and in some cases the beta UI shows incomplete data. Our pipeline picked up the malicious version because npm install resolved to it based on semver rules, even before the download stats reflected it. Running the build locally reproduced the same issue, which is how we detected it without necessarily incrementing the public counter immediately.
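If you want to verify this yourself, one way is to pull exactly what the registry serves and diff it against the repo (package name/version illustrative):

```shell
# Download the tarball exactly as the registry serves it:
npm pack some-package@1.3.3

# List its contents -- files here may never have existed in the GitHub repo:
tar -tzf some-package-1.3.3.tgz

# Extract and diff against a checkout of the tagged source:
tar -xzf some-package-1.3.3.tgz
diff -r package/ path/to/github-checkout/
```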


It can also be that the repo was modified after a release.


I see, thanks for the explanations, and thanks for warning us about this!


Hey, I just found your LinkedIn profile and noticed that we studied at the same university - even the same degree! I've sent you a request on LinkedIn, so maybe we could connect and have a chat?

Have a great day!


offtopic, but it sounds like you were trying to reach them about their car's extended warranty ...


@Woeps: In case your joke gets downvoted into oblivion, FYI I found it very funny.


What a small world - let's!


Thanks for sharing this with us. This advice (change from being a shooting star to helping others to improve) seems to be in contrast to the article. Or am I missing something?


Thank god for this article. I'm a new tech lead with 1 person in my team. This person is even from a different country, new to the industry and not that advanced in his programming skills. I noticed everything this article describes. Although everything gets better day by day, week by week, it is still difficult and I sometimes even have the feeling "damn, do I really want to be a tech lead?"

Every tip is appreciated.


I have a couple pieces of advice.

There are a hundred ways that you would have written any piece of code differently than your mentee, and if you correct all of them you will both be unhappy and unproductive. There are truly amazing coders who code differently than you or I. Focus on issues that materially affect the application (correctness, reliability, performance, security, maintainability - the last being less important than the others) and give a clear explanation of why you're making the critique and how it affects the application.

Also remember you're a team, and he wants the application to be awesome too. Approach critiques with "here's a way we can make the application faster" as opposed to "you wrote slow code".

Document every critique you make to create a code guidelines document. This will help anyone else who joins the team, will make your critiques feel less arbitrary, and will help them remember and be able to look up your critiques.

Make sure you have a standup every day to watch over their progress and code.

It's really easy to focus too much on getting your own work done and slack off as a tech lead or mentor. Always prioritize your mentee's work over your own. Don't start your work until you've set him up for success for the next few days. (code reviews, making sure he has work lined up and understands what he needs to accomplish and how he needs to accomplish it)

Maintain a positive attitude, remember to compliment them, err on the side of over complimenting than under complimenting.

Don't be afraid to ask for their advice or for them to do research.

Expect the quality to not be up to what it would be if you wrote it. Try to plan for this by increasing QA time, testing, and spending more time reviewing the parts of the applications where correctness is the most important. Also make sure to do performance testing in case he did something boneheaded.

Over communicate. Explain the why's of everything. There are 100 tech leads who under communicate for every tech lead who over-communicates. So chances are your team will run better if you communicate more.


This is great advice.

> Document every critique you make to create a code guidelines document.

It helps more to have as much of this as possible in lint and style checkers, so there's no arbitrariness at all.


Agreed, also saves a lot of time for everyone.


>Over communicate. Explain the why's of everything.

I would like to second this, if your schedule allows it. Attempting to make sense of a large codebase (that likely has much domain-specific and even historical reasoning within) when you are inexperienced and/or new to the domain can be a much better experience if the information flows as freely as possible from the expert to the new person.


i have worked with a junior multiple times.

a junior-senior combination works well with pair programming.

i suspect your problem is that you can't yet judge what your junior is capable of, so you assign him tasks and then find out he is struggling with a task, or doing it wrong, and then you have to spend time correcting or teaching.

in pair programming these activities go hand in hand. you tackle a problem together. at first you take the lead, but ask the junior how he would solve it. initially you drive (type the code) and he observes. once he has seen you code for a while, you can let him drive. as you work together you slowly learn about his abilities, and when tasks come up that you feel confident he can do by himself, then let him, while you bugger off to deal with your email.

depending on the nature of your work, you may need to spend some time doing managerial stuff that your junior can't help you with, or you have to attend meetings. (although for meetings about the code you write i'd take the junior along)

with only one person in your team i do hope though you get at least 50% of your time to code yourself, which you can use for pair programming. the other time your junior will spend on tasks on his own or learning something that you'll need next.


Junior-senior pair programming is different from managing one report, though the line is quite blurry. When I had a solo report it felt much more like a junior-senior pairing than being a manager (especially as it was their first eng job, and teaching them the basics was most of what I did).

Ultimately as a manager you're also evaluating their work and are in a position to decide things like if their employment continues, if they get promotions/raises, etc; that is what makes it distinct. Feedback in a junior-senior pairing is much more purely about helping them grow.

Dang though. 50% of your time to code... I'm an IC (as much as I want to be an EM) and that sounds really nice. I regularly have whole days where I don't get to code (in part because, as a potential future EM, I'm expected to mentor; I have 7 regular 1-1s, for instance).


how many people are you mentoring? obviously if it's more than one then you won't have as much time to code. i was thinking of the simple case where a team of two people is responsible for a single project. if one of them is the project manager, the other a coder, then the pm should not need all day to manage. unless of course they are at the kind of company that wastes everyones time with meetings and whatnot that prevents anyone from getting actual work done.


this is amazing advice. I figured this out the hard way as the manager just last week. Was initially apprehensive as I haven't done much pair programming before, but it was incredibly productive and my report loved it


"damn, do I really want to be a tech lead?"

not with a single report. the overhead is maddening. much better to have 2..3..4 where you can put them to plan/discuss/evolve solutions while you just update and evolve checklists. debrief daily, use their work. be consistent, write your own daily summaries for upstream. going from 1 to 4 should ~double the time you need while making you much more productive/deliver more comprehensive skills & results at the same time. make sure you get paid for keeping things on track while doing your usual! p.s./edit: four is a great size. 2x5 is the most i'd want to do. coordinate with hr but stay out of it.


OP here - glad that you enjoyed it, thanks for reading!

One thing I'll mention is that management is hard and largely a learned skill. I think of it like playing a sport. Some people are better than others due to innate ability, but everyone can improve with practice.

We've written / will write more on these sorts of topics if you're looking for more tips. Best of luck!


Interesting! Do you know of any resource where I can read more about this?


The tech is called Retpoline, and this blog has some pointers: https://www.blog.google/topics/google-cloud/protecting-our-g...


There were a few interesting bits in a post the other day about their new compute VM class, #4 in particular: https://cloud.google.com/blog/products/compute/understanding... It's pretty light on details but was news to me.


This is not related to Spectre/Meltdown, which was over a year ago.

