Hacker News

While your points are valid and I personally don't like the inefficiencies, the challenge is that credit card companies are good at hiding these details from consumers.

1) Rules are in place so merchants can't mark up products for credit card payments, so the standard displayed price already includes the markup, and cash-paying customers are typically not rewarded.

2) An annoyance for customers, but it is mostly the credit card companies on the hook. The credit card companies know this but have calculated that the money lost to fraud is still outweighed by consumer convenience. Also, we are seeing things like Apple Pay starting to address the problem of exposing the credit card number itself.

3) But there is an advantage for people who do know how to stay within a budget and not overspend. First, if money is stolen or there is a dispute, the credit card issuer is on the hook, and they have a strong incentive to resolve the issue quickly and fairly. When money is taken directly from your bank account, as with a debit card, and there is a problem, you may face liquidity problems while you wait for it to be resolved, and liquidity problems can cause serious hardship.

4) Yes, they are inefficient, but banks deal with this (mostly), generally not consumers (unless you are talking about ACH or wire transfers). (The exception is 'what credit cards do you accept here' if the answer isn't all of them.)

5) Yes, though since currencies fluctuate, it is kind of a messy problem and credit cards at least made it convenient.

Bonus) I'll also add that credit cards have both reward incentives and protection policies, which most people seem to forget about. (I usually have to remind people that if they bought a computer with a credit card and it broke just out of warranty, they probably have an extra year of extended warranty through their credit card.) And people seem to go crazy for frequent-flier miles and similar rewards, even though they may not be totally cost-effective in the end.


> I'll also add that credit cards have both reward incentives and protection policies

My most-used card (American Express Blue Cash Preferred) gives me 6% back on groceries and 3% back on gas. It also has tons of purchase protection and insurance benefits.

So not only do I not care what merchants have to pay to accept my card, AmEx is paying me lots of money every year to use their card.

Those are some high hurdles to get over.


And those "groceries" might as well be Amazon gift cards purchased in a grocery store.


> Rules are in place so merchants can't markup products when using a credit card so the standard displayed price already includes the markup and cash paying customers are typically not rewarded.

This is no longer true. The Dodd-Frank Wall Street Reform and Consumer Protection Act now allows merchants to add a surcharge for credit card use.


State laws still trump that.

Credit card surcharges are prohibited in California, Colorado, Connecticut, Florida, Kansas, Maine, Massachusetts, New York, Oklahoma and Texas.

Thus in those states, the markup will be added to/absorbed into the normal price, and you will pay the credit card markup whether you use a credit card or not. (Some states may still allow explicit cash discounts, but very few merchants use them.)


One more thought on the "one currency" issue. At least in the US, legal tender (monopoly) laws effectively punish using anything that isn't dollar-based: you are required to accurately report capital gains you acquire from currency value fluctuations. This is an IRS/accounting nightmare for most people, hence most people avoid multiple currencies.
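The capital-gains point can be made concrete with a small worked example. This is only an illustrative sketch of the arithmetic (the gain is the dollar value when spent minus the dollar cost basis when acquired); the class name, exchange rates, and amounts are all hypothetical, and real tax treatment has many more rules.

```java
// Hypothetical sketch: why spending a foreign currency can trigger a
// reportable capital gain under US rules. All numbers are made up.
public class CurrencyGain {
    // Gain = USD value at time of spending - USD cost basis at acquisition.
    static double capitalGain(double foreignAmount,
                              double rateAtPurchase,
                              double rateAtSpend) {
        double costBasis = foreignAmount * rateAtPurchase;    // USD paid to acquire
        double valueAtSpend = foreignAmount * rateAtSpend;    // USD value when spent
        return valueAtSpend - costBasis;                      // reportable gain (or loss)
    }

    public static void main(String[] args) {
        // Buy 1000 euros at $1.10/EUR, later spend them when the rate is $1.25/EUR.
        double gain = capitalGain(1000, 1.10, 1.25);
        System.out.printf("Reportable capital gain: $%.2f%n", gain); // $150.00
    }
}
```

Tracking this basis for every transaction, across every rate movement, is the accounting burden the comment describes.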


This is very much in the spirit of Casey Muratori's (Handmade Hero) 'Compression Oriented Programming'.


For all the people complaining about both (the lack of) generics and high-performance systems-level programming: this author and his allies in the video game industry, such as Mike Acton (Data Oriented Design in C++) https://www.youtube.com/watch?v=rX0ItVEVjHc and Jonathan Blow (Ideas about a new programming language for games) https://www.youtube.com/watch?v=TH9VCN6UkyQ, argue that performance and focusing on the problem you are actually trying to solve are intimately related. Needless levels of abstraction (particularly ones that go against the hardware) and generically solving for cases you will never actually use are a waste of both real performance and developer time.


Once upon a time, there was a strong movement/belief that everything belonged in the web browser, that operating systems should be no more than a shell to load a web browser, and in fact that the web browser should be the one and only tool anybody ever uses, no exceptions.

Your thinking and the article's point show that attitudes have dramatically shifted.


I always skim the comments first to make sure the link isn't just worthless click-bait. I don't like to reward manipulative and deceptive tactics.


Languages never really die. Microsoft frameworks tend to live well beyond what Microsoft wants (look at MFC). Enterprise in particular is invested in .NET (and Java and others) and won't be motivated to change quickly. However, don't expect things to move in any one direction; there will continue to be lots of competing technologies.

That said, what is it that you want to do?

For video games, high-performance and real-time applications, and emerging technologies tied to hardware (e.g. VR, wearables), you'll be better served by going into lower-level languages and building a much deeper understanding of how the hardware works.

The web is dominated by JavaScript. For anything client-side in the browser, you are going to be better served there.

Application client-side programming: every vendor has their own thing, and if you want to do well, you'll want to learn the native stack to some extent (Apple/Cocoa, Android, and Microsoft, which might be .NET but keeps changing what it pushes, like C++/CX and WinRT).

Server backends are harder to predict. Lots of competition here. Lots of trade-offs, from an electricity, heat, and hardware cost standpoint, to being able to pull high-level frameworks off the shelf to put something together easily, to the social-engineering challenge of what people in your organization want to do (e.g. standardize on one language for everything).

But ultimately, as a career decision, a true understanding of how things work and the ability to solve new problems matter more than learning a framework. Being able to intelligently pick the best tools and trade-offs, and knowing when they are inappropriate, will set you apart; otherwise you are just another religious zealot blindly pushing the same technology for everything, no matter how appropriate or inappropriate.


OP here - Thanks "dottrap" - upvoted!

I appreciate your insight. Any thoughts specifically related to .NET and its future, or lack thereof?


My prediction: Server side, I expect .NET to look very much like it does today in, say, 10 years: still around with lots of people using it, but with shrinking market share, just because the market continues to grow (not necessarily because people are abandoning it) and newer 'sexier' technologies will continue to emerge and compete in the space (e.g. look at Go and Rust), diluting percentages even more.

I don't expect cross-platform desktop client-side .NET to grow significantly. Mono has tried for 10 years; Sun tried with Java for 10 years before that. Particularly with both being odd men out on the two dominant platforms, iOS and Android (Android's frameworks are very different from classic Java frameworks), I don't see them gaining much more penetration. And Apple at least has shown it isn't sitting still and is actively trying to woo developers, hence the introduction of Swift. (Ironically, Microsoft, who invented .NET, is currently all over the map about what developers should use to develop for its platforms.)


Video games - Unity 3d though? C++ is for low level engines.


If you want a serious career in video game programming, AAA developers/managers will generally prefer candidates who know C/C++ and understand low-level details and hardware (e.g. can explain why cache locality is the number one challenge in hitting your target framerate, and what to do about it).
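The cache-locality point can be sketched in a few lines. This is a hedged illustration of the data-layout idea (the "structure of arrays" style favored in data-oriented design, versus an array of separate heap objects); the class and field names are hypothetical. In the contiguous layout, a hot per-frame loop reads memory sequentially, while the object-array version chases pointers to objects that may be scattered across the heap.

```java
// Sketch: two layouts for a per-frame particle update. Both compute the
// same result; the primitive-array version is the cache-friendly one.
public class DataLayout {
    // "Array of structures": each Particle is a separate heap object.
    static class Particle {
        float x, vx;
        Particle(float x, float vx) { this.x = x; this.vx = vx; }
    }

    static void integrateAoS(Particle[] ps, float dt) {
        for (Particle p : ps) p.x += p.vx * dt;   // each p may live anywhere on the heap
    }

    // "Structure of arrays": all positions contiguous, all velocities contiguous.
    static void integrateSoA(float[] xs, float[] vxs, float dt) {
        for (int i = 0; i < xs.length; i++) xs[i] += vxs[i] * dt; // sequential access
    }

    public static void main(String[] args) {
        int n = 100_000;
        Particle[] ps = new Particle[n];
        float[] xs = new float[n], vxs = new float[n];
        for (int i = 0; i < n; i++) {
            ps[i] = new Particle(0f, i * 0.001f);
            vxs[i] = i * 0.001f;
        }
        integrateAoS(ps, 1.0f);
        integrateSoA(xs, vxs, 1.0f);
        System.out.println(ps[n - 1].x == xs[n - 1]); // true: identical results
    }
}
```

The results are identical; the difference shows up in cache misses per frame once `n` is large, which is exactly the interview-question territory the comment describes.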

Generally, in my experience, somebody with only C#/Unity experience will not be taken seriously for this kind of position unless they show they understand all the other stuff (which tends to be rare).

Somebody going for level and content designer jobs may get a job if they have Unity experience, however it is not because they know Unity, but simply that they show they can put something together. Most studios have their own custom in-house tools that they will need to train new hires on. (And historically, any direct scripting not in the custom tools is done in Lua.)


I think for casual games, MonoGame (XNA, but for platforms besides Windows) is a very attractive proposition, since it's pretty easy to release a multiplatform game with it.


This is sound advice.

I've been developing on Android since the 1.x days. Garbage collection has sucked, and continues to suck, on Android.

People should not be so blind as to think that better garbage collection will appear soon, or that a real-world improved garbage collector will magically make all the GC-related performance problems go away.

Google has demonstrated over and over that they are extremely slow to improve things in Android (see Java version support, see audio latency, see NDK, see Eclipse/Gradle transition, etc). And when they do finally improve things, it usually is still lacking.

Garbage collection in general is hard to get right. Even in better environments with highly optimized and tuned GCs, it still causes problems. It is unlikely Android is going to leapfrog them.
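One common workaround for GC pauses in hot paths (per-frame game loops, audio callbacks) is to stop generating garbage: reuse objects from a pool instead of allocating fresh ones. Below is a minimal sketch of that idea; the class name and buffer type are hypothetical, not any particular Android API.

```java
import java.util.ArrayDeque;

// Minimal object-pool sketch: reuse buffers instead of allocating per
// frame, so the garbage collector has less work and fewer pause-causing
// collections. Not thread-safe; a real pool would also cap its size.
public class BufferPool {
    private final ArrayDeque<float[]> free = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) { this.bufferSize = bufferSize; }

    public float[] acquire() {
        float[] buf = free.poll();               // reuse a returned buffer if any
        return (buf != null) ? buf : new float[bufferSize]; // allocate only on miss
    }

    public void release(float[] buf) {
        free.push(buf);                          // returned buffers are recycled, not collected
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(1024);
        float[] a = pool.acquire();              // allocated on first use
        pool.release(a);
        float[] b = pool.acquire();              // same array handed back: no new garbage
        System.out.println(a == b);              // true
    }
}
```

This trades memory footprint and some bookkeeping for predictable frame times, which is usually the right trade on a GC-constrained platform.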


This domain looks very specialized. For a domain that seems math-centric and computationally expensive, I'm genuinely curious which pre-written libraries would be useful, since they would ultimately be a factor in language choice.


I was really excited when the very first iPhone launched, because that one was essentially unsubsidized (or as close as we in the US had ever seen). Unfortunately, mass consumers showed they wanted subsidies, so the next generation of iPhone saw the plan cost go up and the upfront cost go down. Everybody I complained to thought I was crazy about the cost going up, because all they looked at was the initial cost of the phone going down, and they failed to do the math on how much the increased plan cost would cost them over two years.
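The math the commenter describes is simple total-cost-of-ownership arithmetic. The prices below are hypothetical, chosen only to illustrate how a cheaper handset with a pricier plan can cost more over a two-year contract.

```java
// Illustration of subsidized-phone math with made-up prices: compare
// total cost over a 24-month contract, not just the upfront price.
public class PhoneMath {
    static double totalCost(double phonePrice, double monthlyPlan, int months) {
        return phonePrice + monthlyPlan * months;
    }

    public static void main(String[] args) {
        double unsubsidized = totalCost(599, 60, 24); // $599 up front, $60/mo plan
        double subsidized   = totalCost(199, 85, 24); // $199 up front, $85/mo plan
        System.out.println(unsubsidized); // 2039.0
        System.out.println(subsidized);   // 2239.0 -- $200 more despite the cheaper phone
    }
}
```

With these (invented) numbers, the "cheaper" subsidized phone costs $200 more over the contract, which is exactly the comparison most buyers never made.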


I honestly don't know where the "scam" line lies.


Whales apparently also lost taste buds.



> Ironically, from the code I've seen, I don't think I've ever seen anyone actually misuse goto, probably because the people who use it actually know what they're doing. That's just anecdotal on my part, but still.
This is no longer just anecdotal. An empirical study on this was just linked on Slashdot the other day. http://developers.slashdot.org/story/15/02/12/1744207/empiri...


