
Accepting payments is getting harder - jaredtobin
https://medium.com/@folsen/accepting-payments-is-getting-harder-1b2f342e4ea
======
Animats
This is good. People are getting fed up with replacing their credit card every
six months because some online retailer had a breach. You can outsource
payment processing to Stripe, Paypal, Square, Yahoo Store, etc. There's no
reason every web merchant should see credit card numbers.

Stripe is in Visa's doghouse right now.[1] Their entry on the Visa Global
Registry of Service Providers has turned yellow, with an expiration date of
Mar 31, 2015. This means they're having some PCI compliance problem.[2] Visa
gradually cranks up penalties until the problem is fixed, or, after about 9
months, just pulls the plug. Visa says Square and PayPal are OK right now.
Yahoo is also in the yellow doghouse. (If you're a Stripe or a Yahoo Store
merchant, they were supposed to inform you that Visa put them in the doghouse,
so you can change vendors. Did they?)

[1] [http://www.visa.com/splisting/searchGrsp.do](http://www.visa.com/splisting/searchGrsp.do)

[2] [http://usa.visa.com/download/merchants/Bulletin-PCIENFORCE-102214.pdf](http://usa.visa.com/download/merchants/Bulletin-PCIENFORCE-102214.pdf)

~~~
BlackFly
The real solution to the problem is use of an integrated circuit card, usually
through EMV.

If a web merchant uses 3D Secure (branded SecureCode, Verified by Visa, and
SafeKey by MasterCard, Visa, and AmEx respectively), the issuing bank can
implement the same level of security in a web transaction as in a card-present
chip transaction: proof that the transaction was originated by someone who has
control over the card, and proof that it was originated by someone who has
knowledge of the PIN.

In these schemes you can store the PAN all you want. As long as the 3DES key
is never read from the card, the PAN does you no good. Hopefully, when the USA
catches up to the rest of the world in this regard, PCI will relax security
requirements for merchants/acquirers.

~~~
nothrabannosir
This solution is already implemented in large parts of the world and good to
go! But the incentives are sometimes not right. As a merchant, I want IC
payments to be cheaper. I want my incoming IC payments accounted for
separately from the non-IC ones: if fees increase on the latter, I'd like to
keep my rates on the former.

Ultimately, I can then pass these savings on to the customers.

But as long as there is no incentive to ask for IC, why would I? It's just an
annoyance.

~~~
BlackFly
I'm a bit confused as to what you mean.

Physical merchants eliminate liability for fraud and get reduced interchange
fees by accepting chip.

Web merchants get reduced fees by using 3D Secure (and the other schemes'
versions). It is the issuing bank's decision whether 3D Secure uses a chip or
not, not the merchant's. Many banks use SMS push, RSA tokens, OTPs sent in an
envelope, or just passwords.

------
borski
For what it's worth, PCI compliance as it stands today is complete BS. It
provides a false sense of security and most of the PCI ASVs are the scourge of
infosec. I can't tell you how many customers we have that use the cheapest
possible PCI ASV for "compliance," but then use us in addition for "real
security," despite the fact that we aren't an ASV. We've intentionally stayed
away from becoming one thus far, actually, because that is a whole political
game in itself. [1]

The new requirements are better. Stringent and hard to comply with, but
better.

The real solution, as I see it, is to build automated security testing into
your SDLC / Dev process. Penetration tests, when done by a good firm like
Matasano, are incredibly useful, but lose their value the next time you push
code. Building tools like Tinfoil into your CI process makes sure you don't
get owned between pen tests.

PCI is unfortunately written by political minds and lawyers, not engineers or
infosec folk. This is an unpopular comment, but is true in my estimation.
Comply because you must, but please don't treat it as the end-all be-all. Care
about your customers' data just a little bit more.

[1] [http://bretthard.in/2013/01/the-pci-asv-process-sucks/](http://bretthard.in/2013/01/the-pci-asv-process-sucks/)

~~~
akshatpradhan
TinFoil, I remember I invited you to speak at the Boston Security Meetup
several years ago!

>The real solution, as I see it, is to build automated security testing into
your SDLC / Dev process. Penetration tests, when done by a good firm like
Matasano, are incredibly useful, but lose their value the next time you push
code. Building tools like Tinfoil into your CI process makes sure you don't
get owned between pen tests.

This is false because you're suggesting that security testing of your SDLC, a
subset of the compliance program, is a more diligent solution than the entire
compliance program. I explain myself in a comment below and I'd be glad to
debate with you:
[https://news.ycombinator.com/item?id=9510369](https://news.ycombinator.com/item?id=9510369)

~~~
borski
Ha, good to see you here.

I'm not suggesting that there is absolutely no value to any of PCI. The fact
that it forces you to think about security at all is already of some value.
However, I am saying that passing a PCI audit is incredibly easy as compared
to thorough automated testing, and especially compared to a (good) manual
penetration test. Because you can pass a PCI audit relatively easily, people
will do that and think it's enough, when in reality there is far more they
should be doing to protect their customers.

Should you not do PCI? No, of course you should, as it's required by the
processors. Is PCI going to protect you from getting breached? No, it's not
enough, and you shouldn't pretend it is.

If you have to pick exactly /one/ thing to do in addition to (or instead of)
PCI, building thorough automated security testing into your SDLC process is
it.

~~~
akshatpradhan
>If you have to pick exactly /one/ thing to do in addition to (or instead of)
PCI, building thorough automated security testing into your SDLC process is
it.

I don't understand how secure SDLC testing is an addition to PCI when it's
really a sub-requirement of PCI (Req 6, which addresses the SDLC and secure
code testing).

I'm going to rephrase, because I'm still confused: you're saying that in
addition to doing PCI, I should do a sub-requirement of PCI.

Why does that sound like circular logic to me?

~~~
borski
Show me where it suggests automated testing as a regular part of your SDLC. It
recommends applying patches, coding to secure guidelines / best practices,
doing code reviews, and running an automated or manual pen test at least once
a year. Nowhere does it state in Requirement 6 that you should build automated
testing into your SDLC.

Reference:
[https://www.pcisecuritystandards.org/documents/pci_dss_v2.pd...](https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf)

~~~
akshatpradhan
Just so we're clear, what do you define as automated testing? Let me know.

~~~
borski
Scheduled testing that happens on an automatic basis, or that occurs whenever
code is deployed or committed. For example, whenever you would run your unit
tests or integration tests, you should also run security tests.

It's the difference between doing an automated or manual penetration test
every 12 months and testing your application for vulnerabilities with every
deploy.
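
To make that concrete, a security check can be wired into the same suite that runs on every commit. This is a minimal, hypothetical sketch; `escapeHtml` stands in for whatever output encoding the real app uses:

```javascript
// Hypothetical sketch of a security test that runs alongside unit tests
// on every commit, rather than waiting for an annual pentest.
// escapeHtml stands in for the app's real output-encoding function.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, (c) => (
    { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]
  ));
}

// A payload that should never survive output encoding intact.
const XSS_PROBE = '<script>alert(1)</script>';

function runSecurityChecks() {
  const encoded = escapeHtml(XSS_PROBE);
  if (encoded.includes('<script>')) {
    throw new Error('XSS probe survived output encoding');
  }
  return true;
}
```

The point is not this particular check, but that exploitable patterns get treated like ordinary regressions: cheap, repeatable, and run with every deploy.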

~~~
akshatpradhan
It's 6.5.1-6.5.10 of PCI v3.1. That's all security testing. You can use a
static analyzer like Brakeman to speed things along instead of doing it
manually.

~~~
borski
Those requirements [1] state the following:

 Train developers in secure coding techniques, including how to avoid common
coding vulnerabilities, and understanding how sensitive data is handled in
memory.

 Develop applications based on secure coding guidelines.

It says nothing about automated testing, which is precisely my point. The
requirement is that you attempt to follow security best practices and train
your employees well. My point is that even the best-trained employees make
mistakes.

The reason unit tests exist is to make sure you didn't accidentally break
stuff when you write code. I'm arguing that automated security tests should
exist for the same reason, and that's what we're trying to offer and build.

Incidentally, section 6.6 does state that you should use manual or automated
testing at least annually; PCI 3 added a new clause which states 'after any
changes.' Also, it explicitly specifies that you need /either/ automated or
manual testing, /or/ a WAF. WAFs, as you're likely well aware, miss things
very often. [2] WAFs are a good stopgap, but should not be relied upon to
provide actual security; they should only be relied upon to attempt to prevent
an attack while you are in the process of fixing the vulnerabilities
underneath.

Because penetration testing is so expensive, typically, I suspect it will be
more common for people to go with the WAF than to do a pentest with every
deploy. I don't have stats to back that up, since PCI 3.1 hasn't actually been
enforced yet, but I strongly suspect that will be true.

[1] [https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf](https://www.pcisecuritystandards.org/documents/PCI_DSS_v3-1.pdf)

[2] [http://www.slideshare.net/zeroscience/cloudflare-vs-incapsula-vs-modsecurity](http://www.slideshare.net/zeroscience/cloudflare-vs-incapsula-vs-modsecurity)

------
tptacek
Good. There's no love lost between me and companies that make significant
money doing PCI assessments (they tend to be the bottom-feeding remora of the
infosec economy), but the one criticism you could not level against the PCI
certification program over the last 10 years is that it was _too hard to get
certified_.

Use Stripe. Move on.

~~~
StavrosK
What happens when the iframe allowance is removed, and not even using Stripe
can save you from the credit card companies? This seems like a transparent
plan to make PCI assessors more money.

~~~
kijin
Judging from how proactive Stripe is being with respect to recent changes,
they probably have a Plan B in case the iframe exception disappears. For
example, they could provide a payment page hosted with them that you can
customize (basically an enhanced version of Stripe Checkout), or even offer to
iframe your webpage the other way around.

~~~
StavrosK
Yep, knowing Stripe, I'm not _that_ worried that they won't be able to figure
something out. What does worry me is that they _need to_ figure something out
at all.

------
sjtgraham
> The worst offenders however are the requirements that some businesses simply
> cannot comply with unless they have some serious cash laying around.
> Examples of this are

>> Quarterly external vulnerability scans must be performed by an Approved
Scanning Vendor (ASV), approved by the Payment Card Industry Security
Standards Council (PCI SSC).

> and

>> Is external penetration testing performed per the defined methodology, at
least annually, and after any significant infrastructure or application
changes to the environment (such as an operating system upgrade, a sub-network
added to the environment, or an added web server)?

Have you ever had a penetration test done? They basically run a load of OSS
automated tools, generate a PDF report, and then charge you $1000s. It gives
you no real insight and reveals nothing unless you've been a total noob. Why
is this so expensive?

Broadening of PCI scope + needlessly expensive compliance = Smells like a
large opportunity.

~~~
dsacco
I'm sorry that's been your experience. What _should_ have happened is a
primarily manual penetration test, administered by security engineers who
themselves used to be fully competent software developers. Any automated
tooling should have had strict manual verification and should not have been
the focus of the test. Furthermore, superfluous results should not have been
submitted in the PDF report.

Strictly "policy" audits such as PCI compliance differ a bit, but in general
they should still involve a technical deep dive into your product's
infrastructure, conducted by consultants with expertise in multiple tech
stacks and overall experience in a variety of frontend and backend frameworks.

The final deliverable ("PDF report") also should have been hand-written, and
in language that conveys technical expertise, complete with recommended steps
towards remediation of any issues.

My employer, Accuvant, does this, as does Matasano (better known here on HN).

As for why it's so expensive...well, I bill out at about $2000 per day. It
really comes down to a lot of what people like patio11 and tptacek like to
talk about here regarding consulting:

1. This is highly specialized work, with a much smaller population of
competent engineers than typical web developers (for example). As such, it
naturally receives a higher fee for supply and demand. Now, some people abuse
this and run scans like Nessus and call it a day. These are not real infosec
firms, they are parasites.

2. More specifically, we ask for it and we receive it, and we do exceedingly
well. If people keep paying us five figures a week to perform a penetration
test, we're not going to stop asking for it or reduce our prices.

~~~
hueving
>These are not real infosec firms, they are parasites.

The entire consulting penetration testing market is set up to encourage this
behavior. There is no way to prove you actually did anything correctly. Someone
can write a wonderful PDF analysis by hand and still leave the system full of
glaring holes. Customers can't tell a system is broken until it gets hacked.

>More specifically, we ask for it and we receive it, and we do exceedingly
well. If people keep paying us five figures a week to perform a penetration
test, we're not going to stop asking for it or reduce our prices.

Right, but many times I've seen companies do it because they are desperate to
do it for compliance purposes. :/ Essentially there is a non-trivial portion
of the market held up by regulatory demand.

>Strictly "policy" audits such as PCI compliance differ a bit, but in general
they should still involve a technical deep dive into your product's
infrastructure, conducted by consultants with expertise in multiple tech
stacks and overall experience in a variety of frontend and backend frameworks.

I'm curious. Do you review every line of code in a customer's codebase? What
about the code of every library they import? If you don't review imports, do
you leave a big caveat in your report that says their code looks okay, but the
libraries could be full of vulnerabilities?

~~~
raesene4
Whilst I'm not the parent commenter, I do work in the same industry.

>>These are not real infosec firms, they are parasites.

>The entire consulting penetration testing market is setup to encourage this
behavior. There is no way to prove you actually did anything correct. Someone
can write a wonderful PDF analysis by hand and still leave the system full of
glaring holes. Customers can't tell a system is broken until it gets hacked.

A good company should be able to provide a methodology detailing the tests
they do. You'll also see some who report the tests conducted and the results
(positive or negative), so asking consultancies for sample reports would help
you find one closer to your specific needs. Personally, I prefer reporting all
test results, as it keeps both parties straight on what has and has not been
covered.

>>More specifically, we ask for it and we receive it, and we do exceedingly
well. If people keep paying us five figures a week to perform a penetration
test, we're not going to stop asking for it or reduce our prices.

>Right, but many times I've seen companies do it because they are desperate to
do it for compliance purposes. :/ Essentially there is a non-trivial portion
of the market held up by regulatory demand.

Yeah, where people are getting tests for purely compliance reasons, there is a
tendency to go with cheap suppliers, as there's little perceived benefit.

>>Strictly "policy" audits such as PCI compliance differ a bit, but in general
they should still involve a technical deep dive into your product's
infrastructure, conducted by consultants with expertise in multiple tech
stacks and overall experience in a variety of frontend and backend frameworks.

>I'm curious. Do you review every line of code in a customer's codebase? What
about the code of every library they import? If you don't review imports, do
you leave a big caveat in your report that says their code looks okay, but the
libraries could be full of vulnerabilities?

Heh, this is one of the huge gaping holes in security at the moment. Most
applications are now constructed from piles of code acquired from repos (npm,
nuget, rubygems, etc.) that provide absolutely no curation of content; anyone
can put any code they like up there. There is (from what I've seen) very
little appetite from companies to actually try to audit their whole stack,
generally due to the cost of doing so. Manual code review is expensive, and
when you start importing 100Kloc of third-party code into your solution, it
would not be a cheap exercise to validate...

------
cm2187
We should have moved a long time ago to vendor-specific credit card numbers
(e-commerce isn't exactly a new activity). Say I get from my bank a token
which I provide to a vendor, and the first time the vendor uses it to accept a
payment, the token locks to that vendor, i.e. my bank will not allow any
payment with this token to another vendor (another bank account). Then it
doesn't matter if it's stolen; only that vendor can use it anyway. And I could
have the option to tell my bank to make it a single-use token, to cancel a
multiple-use token, or to set a payment cap on that token.

That doesn't seem very complex to implement and would alleviate the vast
majority of the credit card related problems. I am sure it can be made
simpler, have a protocol with redirects to the bank's website that eliminate
the need for the client to copy-paste a token, or to have another mechanism
with similar effects.
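
The lock-on-first-use behaviour described above can be sketched in a few lines. This is a hypothetical illustration of the bank-side logic, not any real bank's system; all names are made up:

```javascript
// Hypothetical sketch of bank-side logic for vendor-locked payment tokens.
// A real implementation would live inside the issuing bank's
// authorization system; nothing here is a real API.
class VendorToken {
  constructor({ singleUse = false, cap = Infinity } = {}) {
    this.singleUse = singleUse;
    this.cap = cap;           // maximum amount per payment
    this.lockedVendor = null; // set on first use
    this.cancelled = false;
    this.used = false;
  }

  authorize(vendorId, amount) {
    if (this.cancelled) return { ok: false, reason: 'token cancelled' };
    if (this.singleUse && this.used) return { ok: false, reason: 'single-use token spent' };
    if (amount > this.cap) return { ok: false, reason: 'over payment cap' };
    // Lock the token to whichever vendor uses it first.
    if (this.lockedVendor === null) this.lockedVendor = vendorId;
    if (vendorId !== this.lockedVendor) return { ok: false, reason: 'wrong vendor' };
    this.used = true;
    return { ok: true };
  }

  cancel() { this.cancelled = true; }
}
```

The first vendor to present the token claims it; every later authorization is checked against that lock, and the single-use and cap options are just extra predicates on the same path.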

Banks are one of these many industries that don't seem to get technology. They
employ massive IT and developer staff but are run by people who don't get it
(and to make things worse, are most of the time massive bureaucracies which
means that even when they know what they need to do they just can't execute).

~~~
fragmede
Bank of America has ShopSafe which allows you to generate a temporary credit
card number to use with the sketchy online merchant that has the particular
gadget I want to buy.

Their implementation leaves much to be desired, but it's a step in the right
direction.

~~~
twothamendment
Discover card has gone back and forth on this. They had a tool on their site
to give you a throwaway CC #. I used it for almost all online purchases. It
went away for a short time, then came back. Now it looks like it is gone for
good. I quit using their card since that was the only reason I had to use it
over others.

Now if they'd only stop sending me "checks" in the mail that are tied to my
account... I'm just waiting for those to be spent by someone else.

------
dangrossman
I always recommend people build their payments on Spreedly
([https://spreedly.com/](https://spreedly.com/)).

It checks off the boxes for minimizing PCI scope; you store no payment
information, and collect none on your website either. You can either do a
transparent redirect (your payment form points to a URL on their domain, which
redirects back to your site with a token) or an iframe.

Once you collect payment information, which they tokenize and store, you can
run charges/auths/refunds against it using any of 81 different payment
processors and gateways. Balanced one day, Stripe the next, and whatever
startup is popular after them in a year -- without changing any of your
payment code.
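
For illustration, the tokenize-once, charge-through-any-gateway pattern looks roughly like this. This is a hypothetical sketch of the pattern, not Spreedly's actual API; all names are invented:

```javascript
// Hypothetical sketch of the gateway-agnostic pattern described above:
// the card is tokenized once, and charges against the token can be
// routed to any configured gateway adapter.
function createVault() {
  const cards = new Map();
  let nextId = 1;
  return {
    // Store card data once, hand back an opaque token.
    tokenize(card) {
      const token = `tok_${nextId++}`;
      cards.set(token, card);
      return token;
    },
    // Charge the stored card through whichever gateway is plugged in.
    charge(token, amountCents, gateway) {
      const card = cards.get(token);
      if (!card) return { ok: false, reason: 'unknown token' };
      return gateway.charge(card, amountCents);
    },
  };
}
```

Swapping `gateway` from one processor adapter to another requires no change to the merchant's payment code, which is the point of putting the vault in the middle.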

~~~
aethr
Does anyone know if using transparent redirects actually waives your
responsibilities for PCI compliance? Even though the credit card details
aren't sent to your backend, they are still collected on a form hosted on your
infrastructure.

If your servers are compromised and malicious JS is added to your payment
form, couldn't an attacker siphon credit card details via AJAX? It seems like
the PCI documentation always uses terminology like "sites that collect credit
card data", which I think sounds broad enough to include sites that use
transparent redirects.

~~~
mirashii
It's generally transparent redirects in the other direction. You load JS from
their servers, the form posts to their servers, redirects back to yours. Often
even the form isn't hosted on your infrastructure, but instead embedded into
your page through some JS that is hosted elsewhere.

If your responsibilities weren't waived or significantly lessened, I'd imagine
companies like Stripe would be significantly less successful.

~~~
aethr
You generally don't need to load any JS from the payment processor to use a
transparent redirect, although I'm sure some work this way. Transparent
redirects just require setting the form action property on your payment page
to a URL on the processor's site. The processor then silently redirects back
to the next step in your process.

Using an embedded payment form also shouldn't require any JS, as it is usually
done using an iframe. This method should be safe, as the same-origin policy in
the iframe would prevent JS on your domain from interacting with form elements
in the iframe. But this is not typically what people are talking about when
they talk about a transparent redirect.

------
jusben1369
This is a great article. I would add a couple of data points for context:

- Visa won't come after you. Your merchant account provider is on the hook.
They let you process cards, so they need to ensure you're PCI compliant.
That's how the flow works.

- PCI 3.0 kicked in in January. People reassess annually, so if you reassessed
last year under 2.0 standards you're still good until your renewal comes due.
That's why realization of this is slowly creeping across the payments space.

- The card networks saw longer, more sustained fraud happening in online
commerce from .js or transparent redirects than from hosted payment pages. So
the big change from PCI 2.0 to 3.0 was wanting to make it harder to build
completely custom payment pages vs. using a hosted payment page. HPPs are
SAQ A, and customized payment pages are SAQ A-EP.

- iFrames and checkouts are really trying to be the best of both worlds.
That's why they're currently treated as SAQ A. There was definitely a lot of
thrashing around how they would be treated when the 3.0 specs were being
drafted and published.

Again, I really enjoyed the article and appreciated Spreedly being included as
a reference. I agree with the major premise: in general, merchants are unaware
that the way they implemented their payment pages now means they're in greater
scope, and the providers aren't doing a good job educating them. It's an open
secret in the industry that many payment gateways add a (pure margin) fine of
$20 to $50 per month onto your account if you don't have valid certification.
In a low-margin business, that reduces the motivation to push small and medium
merchants to ensure they're PCI compliant.

------
dperfect
Stripe plans to use the iFrame "loophole" to enable Stripe.js customers to
qualify under SAQ A-EP as mentioned on their website[1]:

> The new version of Stripe.js meets these criteria by performing all
> transmission of sensitive cardholder data within an iframe served off of a
> stripe.com domain controlled by Stripe.

Can someone help me understand how this is practically any more secure than
the way Stripe.js currently works? It sounds like the intention of the iFrame
exception[2] is to allow a payment form to be loaded, completed, and submitted
to a compliant server all within a _visible_ iFrame. From what I can tell, the
"compliant" version of Stripe.js just submits the data (similar to the
traditional way) via an invisible iFrame - the form is still hosted and
completed on the (likely) non-compliant server.

If that's the case, then I'd expect the "loophole" to go away soon and current
Stripe.js users will have to adopt a payment flow similar to Stripe Checkout;
in other words, it will be obvious that Stripe (a third party) is being used
because the end user will be interacting with a form (or part of a form)
completely hosted on Stripe's servers.

For companies using Stripe to avoid PCI compliance with self-hosted payment
forms, this essentially transforms Stripe into another PayPal checkout-style
service.

[1] [https://support.stripe.com/questions/what-about-pci-dss-3-0](https://support.stripe.com/questions/what-about-pci-dss-3-0)

[2] [https://www.pcisecuritystandards.org/documents/Understanding_SAQs_PCI_DSS_v3.pdf](https://www.pcisecuritystandards.org/documents/Understanding_SAQs_PCI_DSS_v3.pdf)

~~~
matthewarkin
I'm actually working on a blog post about this. Basically, the argument is
that whether you use Stripe.js with the invisible iframe or Stripe Checkout,
as soon as you have some malicious JS in your DOM all bets are off; and while
stealing credit card info from Stripe Checkout may be harder than just reading
$("#credit-card-number").value, it's not /that/ much harder.

(As part of my blog post, I actually use some malicious JS on the merchant
site to steal card info from a Braintree iframe (the drop-in).)

~~~
aethr
Very interested to see your blog post. I was under the impression that if the
data is collected in an iframe with a same-origin policy, malicious JS in the
containing page wouldn't have access to form elements (or anything else)
inside the iframe.

Of course, if you have malicious JS in your DOM, there's nothing stopping it
from rendering its own legit-looking credit card form that simply passes data
off to an external URL.

~~~
matthewarkin
That's basically the concept: once you have malicious JS, you can replace the
iframe with a malicious one that looks the same. You can even have it still
create a legitimate card token, so in theory the website would never know it
was hacked. The other PCI SAQ A scenario is linking off-site. While malicious
JS could change the link you redirect customers to, it would be noticed,
because the customer might see a sketchy URL and the merchant would see a
decrease in sales.

~~~
chinathrow
So where's the news then? Malicious JS is known to be bad.

Would the next step be to turn off JS entirely? Or go back to hosted CC forms
only?

~~~
matthewarkin
Historically, the thought has been that the iframe and a redirect could both
be treated as SAQ A (the easiest form of compliance) because, if you changed
the iframe that was displayed or the page the customer was linked to, it would
be extremely difficult to steal customer information silently.

So if a merchant links to paypal.com/merchant and I inject JS to change it to
paypal.com/matthewarkin, the merchant would immediately know something was
wrong, because they are no longer receiving money. The issue with how Stripe,
Braintree, and others have implemented their JavaScript and iframe
integrations is that it's pretty easy to replace the iframe with a malicious
URL (paypal.com/matthewarkin) while still allowing the merchant to receive
their funds.

A simple fix for this would be for the API keys used to instantiate the iframe
to be usable only from within the iframe, so they could not be used to call
the create-token API directly.
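
That fix could look something like the following on the processor's side, assuming the tokenization endpoint can see the browser's Origin header on the token-create call. All names and origins here are hypothetical:

```javascript
// Hypothetical processor-side check: a publishable API key may only
// mint tokens when the request originates from the processor's own
// hosted iframe, not from arbitrary merchant-controlled pages.
const IFRAME_ORIGIN = 'https://js.processor.example'; // assumed hosted-iframe origin

function mayCreateToken(request) {
  // request: { apiKey, origin } with origin as reported by the
  // browser's Origin header on the token-create call.
  if (!request.apiKey || !request.apiKey.startsWith('pk_')) {
    return { allowed: false, reason: 'missing or non-publishable key' };
  }
  if (request.origin !== IFRAME_ORIGIN) {
    return { allowed: false, reason: 'token creation only allowed from hosted iframe' };
  }
  return { allowed: true };
}
```

With a check like this, injected JS running on the merchant's page can no longer call the create-token endpoint directly with the merchant's publishable key, because its requests would carry the merchant's origin rather than the iframe's.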

~~~
jtdowney
At Braintree, we have been working on the approach you mentioned. We'll soon
update our iframe products to allow a merchant to opt in to only ever
receiving cardholder data via the Braintree iframe. With this change, we could
actively block malicious JavaScript from rewriting the merchant form by
rejecting data that doesn't come from the Braintree iframe. Things like this
aren't a panacea, though, which is why it's important for merchants to use
technologies like Content Security Policy and leverage as much of the browser
security model as possible.

~~~
matthewarkin
I think even more awesome was the Hosted Fields feature you just launched, so
that I can have a custom, stylized form where each credit card input is its
own iframe.

[https://www.braintreepayments.com/features/hosted-fields](https://www.braintreepayments.com/features/hosted-fields)

~~~
jtdowney
I agree! I submitted it as a separate item because this conversation was about
rewriting iframes. Although Hosted Fields doesn't directly address the
rewriting for now, we're looking at it closely.

------
netcan
This whole area is a big ole platform problem.

Credit cards are a bad platform to build on. The duopoly structure is a bad
platform for gradual improvement and the regulatory environment is a bad
platform for innovation.

We have deeply entrenched kick-it-forward allocation of responsibility and
fixes to serious problems are characterized by firefighting, designed-by-
committee compliance, cover-your-assness and such. All the hallmarks of a
poorly functioning market, poorly functioning organization and general
pathologies that occur whenever the way we organize is wrong.

Leaving bitcoin aside,^ I think the fundamental problem is having CCs play the
role they do. Instead of customers sending merchants money, merchants request
money from CC companies. That is a bad system.

^The reason bitcoin is difficult to insert into the conversation is that it
has so many big hairy goals: government power over money, macroeconomic
theories of monetary policy baked in… It's a big, interesting project, but the
problem discussed here is only a subset of what bitcoin is about, so it's kind
of a tangent.

------
Silhouette
As a small business taking card payments, we're far more concerned about the
absurd failure rate of perfectly legitimate charges than anything PCI-DSS
says. We lose more customers to failing Stripe charges than to any other
cause, by a considerable margin, and it seems even Stripe don't actually know
why, because it's organisations further down the line making these decisions.

The whole card payments industry is broken, and the sooner the growing direct
payments industry kills off the credit card giants, the better.

------
will_brown
One of my side projects is a membership/subscription-model primary care
medical practice. It uses a third-party payment processor, and we were
recently audited by one of the large payment card issuers.

There was a finding that the third-party processor (which we specifically
chose because many of their clients were major gyms with similar monthly
membership models) was improperly processing our members' payments. If I
recall correctly, there is one standard for one-time payments and a different
standard for recurring payments. A subscription model like ours allows our
subscribers to use either, but the third-party processor used the one-time
payment standard to process both one-time payments and monthly recurring
payments. Even though recurring payments were a major selling point of the
processor, when it came down to it, they were not even aware of the
distinction or of the separate standard. We were actually quite fortunate in
that we had original signed agreements for each and every instance of a member
who agreed to the recurring automated payment, but as I recall, without those
agreements there may have been some kind of repercussion.

Anyway, it is a cautionary tale: just because you use a third party, even a
reputable one that serves national franchises, does not necessarily mean they
know what they are doing.

------
chrisfosterelli
This is a fantastic example of why companies like Stripe are going to come out
ahead.

Other companies are basically telling you "deal with it", while Stripe is
giving you documentation and rewriting their software for it.

~~~
amelius
Well, it is not so simple, because if the payment is required to run on a
separate page, then Stripe's API has to change, I suppose.
------
JimmyL
The biggest thing to remember when dealing with PCI DSS is that it's not the
law.

Your PCI obligations come out of a commercial agreement that you have with
your processor, which comes out of commercial agreements they have with
VISA/MC/et al. That's not to say that it's not a well-defined standard that
you're going to end up having to follow in some way, but rather that
statements like "Both Litle and Recurly flat out say that you need SAQ A-EP"
have more wiggle-room than it would sound like, depending on the rest of the
deal you're presenting them with.

If you're a Level 3, I'd argue the goal should be to keep yourself on SAQ-A -
the methods of which are pretty well-understood now. Pick a vendor which has a
tokenization service designed to be hit from JS (they all work the same way at
their core - download JS which contains an implementation of RSA and a public
key, browser-side encrypt the CHD using that, send it off, get back a token).
Put your payment form inside an iFrame which is served from a PCI-compliant
host (like S3). Once tokenization is complete, send the token from inside the
form back out to the containing page using postMessage or in the querystring.
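As a minimal sketch of the containing-page side of that handoff (the vendor host, message shape, and element id here are hypothetical - each tokenization vendor defines its own format):

```javascript
// Containing page: receive the token posted from the payment iFrame.
// TRUSTED_ORIGIN is the PCI-compliant host serving the iFrame (hypothetical).
const TRUSTED_ORIGIN = "https://payments.example-vendor.com";

// Validate a postMessage event and pull out the token, or return null.
// Checking event.origin is essential: any page can postMessage to you.
function extractToken(event, trustedOrigin) {
  if (event.origin !== trustedOrigin) return null;
  if (!event.data || typeof event.data.token !== "string") return null;
  return event.data.token;
}

// In a browser you would wire it up roughly like this:
// window.addEventListener("message", (event) => {
//   const token = extractToken(event, TRUSTED_ORIGIN);
//   if (token) document.getElementById("card_token").value = token;
// });
```

The point of the origin check is that only the token - never the CHD itself - crosses back into your page, which is what keeps your servers out of SAQ A-EP scope.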

Do all that, and you're fine to stay on SAQ-A
([https://www.pcisecuritystandards.org/documents/Understanding...](https://www.pcisecuritystandards.org/documents/Understanding_SAQs_PCI_DSS_v3.pdf)):

 _Examples of e-commerce implementations addressed by SAQ A
include...[merchant] website provides an inline frame (iFrame) to a PCI DSS
compliant third-party processor facilitating the payment process...Examples of
e-commerce implementations addressed by SAQ A-EP include...[merchant] website
creates the payment form, and the payment data is delivered directly to the
payment processor (often referred to as 'Direct Post')_

Will they change PCI DSS again to remove the iFrame rules? Maybe, but given
the speed the PCI council moves at (and the warnings they give before changing
things), I'd deal with it then.

Lastly, if you're thinking of building a service which white-labels credit
card processing and sells that processing as a service which your customers
can then resell...don't forget about PCI PA-DSS

------
jackreichert
Unless you're in the business of money, you shouldn't really be doing it
yourself, so the important TLDR is:

> Obviously it’s not fun living in that fear, which is why I believe that
> services like Stripe, that help you out with these issues, will thrive.

------
gfosco
Accepting payments is getting easier, because companies like Stripe will
handle the burden of security and compliance.

~~~
ique
Definitely easier "on the whole" compared to 10 years ago, but harder compared
to last year.

------
sageabilly
This move makes sense if you look at PCI's board of advisors[1] - it's a
bunch of bank VPs plus the heads of security for both First Data and PayPal.
The people who run PCI compliance are the ones who stand to lose if PCI
compliance becomes moot, so they are doing all they can to make it seem like
it's the be-all and end-all of internet security and that you'd be a fool to
trust an online merchant that wasn't PCI compliant.

Interesting point in an earlier comment about making vendor-specific security
tokens for internet transactions. That would quite obviously help
tremendously in the case of a breach; however, it would put the onus on banks
to be responsible for security instead of on merchants, and again, the bank
representatives on PCI's board of advisors aren't going to go for that.

[1]
[https://www.pcisecuritystandards.org/organization_info/board...](https://www.pcisecuritystandards.org/organization_info/board-
of-advisors.php)

------
jasonisalive
The problem with credit cards is that when you make a payment, you have to
give away your private key. No amount of securitisation will take away this
fundamental flaw.

This is one of Bitcoin's evolutionary advantages in this space. To send money
with Bitcoin, there is no need to expose one's private key. A massive
corporation could take millions of annual payments and their paying customers
needn't be concerned about their money being at risk. If the entity has poor
security, the only people they endanger are themselves.

------
joshwa
I have to have a new pen test every time I "add a web server"?

~~~
akshatpradhan
You have to pen test only if 1) the web server touches sensitive data, and 2)
the sensitive web server being deployed is configured differently from all
the other sensitive web servers in that same sensitive environment.

If you're just adding web96 and it's configured exactly like web01 through
web95, then it doesn't need pen testing.
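That two-part rule can be sketched as code (the field names and the naive stringify-based config comparison are illustrative only, not part of any PCI tooling):

```javascript
// A new server needs its own pen test only if it handles sensitive data
// AND its configuration differs from the already-tested baseline.
// Note: JSON.stringify comparison is naive (key order matters); a real
// check would diff a normalized configuration.
function needsPenTest(server, baseline) {
  if (!server.touchesSensitiveData) return false;
  return JSON.stringify(server.config) !== JSON.stringify(baseline.config);
}

// web96 cloned from the already-tested web01 baseline: no new pen test.
const baseline = { config: { os: "debian", tls: "1.2", ports: [443] } };
const web96 = {
  touchesSensitiveData: true,
  config: { os: "debian", tls: "1.2", ports: [443] },
};
```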

------
jpatokal
Every time legacy payment processors ratchet up compliance requirements,
cryptocurrencies get another little boost. And while it's easy to forget in
the US, where credit cards are handed out in boxes of cereal, getting a credit
card is an insuperable hurdle for much of the world's population, e.g. the
estimated 75% of Indians who work in the informal economy and thus can't prove
(or don't have) regular income.

------
akshatpradhan
It's great that HN is talking about PCI. The problem with PCI and compliance
in general is threefold.

1\. People don't want compliance [1]

2\. People think compliance is broadly applied

3\. People don't know where to start

I'll answer these point by point.

1) The reason people don't want compliance is that the security industry
claims that compliance is bare minimum and not enough. If they told you that
compliance was simply doing your due diligence on your sensitive devices, I
think the market would have had a software tool to easily get us through the
process by now. (I built a "brilliant" PCI tool btw. Link below)

So let me explain to them why I think compliance is just doing your due
diligence, and we'll do that by simply asking this question: if compliance is
bare minimum and not enough, what is a comprehensive approach available right
now to reasonably protect our sensitive data? The security professionals will
tell you that doing risk assessments and pen testing often is the best
alternative [2].

Their answer is to specifically switch to doing risk assessments and pen
testing often, which is Requirement 11 and Requirement 12 of PCI. Each one of
the bullets written is specifically covered by PCI DSS 3.1, including social
engineering/phishing attacks, which are addressed through security awareness
training. They're telling me that compliance is bare minimum, yet their
suggestion is to do a subset of compliance. It's circular logic. Since it's
circular logic and nobody has been able to provide me with a reasonable,
approachable alternative for going above bare minimum, I hypothesize that
compliance is NOT bare minimum but, in fact, due diligence.

Think of a fort. Forts had defined compliance checklists in the old times. In
a fort, you go through a security rotation of making sure the pot of boiling
oil tips over on time. You practice your smoke signaling so that the
appropriate people are notified in the event of a wall breach. Were they
spending a majority of their security drills taking half their army, launching
it against the fort, fixing what fails, and then doing a risk assessment?

2) A compliance program by definition is only applicable to your sensitive
environment. It cannot be applied broadly. It's forcing you to go through the
decision-making process of asking yourself, "What's most important to protect
right now?" Only you can classify the sensitivity of your devices. Only you
can choose whether your code base is sensitive or your employees' SSNs are
sensitive. But whatever you've classified as sensitive must fall under
compliance. Let's refresh: compliance is designed to be applied only to your
sensitive data. If you choose to put a non-sensitive device under a
compliance program, then you've specifically applied compliance broadly.

3) To approach any compliance program, ask yourself 6 questions on any
specific device.

1\. Does this device store, process or transmit sensitive data (e.g.
cardholder/health/SSN)?

2\. Is there unrestricted access between this device and a sensitive device?

3\. Does this device provide authorization, authentication, or access control
to a sensitive device?

4\. Can this device initiate a connection to a sensitive device?

5\. Can a sensitive device initiate a connection to this device?

6\. Can this device administer a sensitive device?

If you're able to answer yes to any of those questions, then your device is
sensitive and due diligence is required. Let's go back to the fort example.
Is there unrestricted access from that boiling pot to the sensitive gold
chamber? Does the pot provide access control to the sensitive gold chamber?
If yes, then the configuration settings of the pot, pulley, and oil need to
be recorded and monitored periodically. If no, then it's possible that you
have a business justification for not putting as much rigor into recording
and monitoring the correct functionality of that pot.

You've already started the compliance process by asking yourself, "Does my
laptop initiate an outbound connection to a sensitive device?" If yes, then
your laptop falls under compliance and due diligence is required.
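The six questions reduce to a simple any-of check. A sketch (the question keys below are my paraphrases of the list above, not terms from the PCI DSS):

```javascript
// A device is in scope ("sensitive") if any scoping question is answered yes.
const SCOPING_QUESTIONS = [
  "storesProcessesOrTransmitsSensitiveData",
  "hasUnrestrictedAccessToSensitiveDevice",
  "providesAuthOrAccessControlToSensitiveDevice",
  "canInitiateConnectionToSensitiveDevice",
  "sensitiveDeviceCanInitiateConnectionToIt",
  "canAdministerSensitiveDevice",
];

// answers maps question keys to booleans; any unanswered question is a no.
function isInScope(answers) {
  return SCOPING_QUESTIONS.some((q) => answers[q] === true);
}

// Example: a laptop that can open a connection to a sensitive server
// is itself in scope, exactly as the comment above describes.
const laptop = { canInitiateConnectionToSensitiveDevice: true };
```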

Everything else is just record keeping. Create a network diagram of only your
sensitive assets. Write down how you rotate your keys on those sensitive
assets. Write down what log files you periodically review. Go through a
practice run of your Incident Response in case there is a breach of your
sensitive asset.

This is a pretty long post and I hope it helped.

Shameless plug, I'm building a tool called ComplianceChimp to guide you
through this entire process including recording your procedures with Github
flavored Markdown. You can see how I'm using our tool to get us through the
PCI Compliance process. [3]

[1]
[https://news.ycombinator.com/item?id=9510435](https://news.ycombinator.com/item?id=9510435)

[2]
[https://gist.github.com/akshatpradhan/1573e5f6c1872b6af129](https://gist.github.com/akshatpradhan/1573e5f6c1872b6af129)

[3] [http://cc-
stg2.herokuapp.com/compliancechimp/documents/softw...](http://cc-
stg2.herokuapp.com/compliancechimp/documents/software-development-process-
sdlc)

------
mcdougle
What service might I use if I needed to fulfill the following requirements:

1) Say I already have a customer -- he already paid a signup fee and we charge
him monthly, so he's already put in his credit card information. At a later
date, we need to charge him for something other than his monthly subscription
fee. This is something he can do himself by logging in, but also something the
site administrator needs to be able to do by selecting his account and
clicking a button. In this case, we do not want the user to have to re-enter
his credit card information; we want this part to be seamless. Is there a
payment API that can do this -- random one-off charges against an existing
account without the user having to sign into the third-party service himself?

2) Is there a service that charges bank accounts directly -- as in, customers
enter their bank account number instead of a credit card number? Other than
PayPal -- they seem to be the only one that does this.

~~~
benmanns
For 2), it looks like Balanced payments accepts ACH debits.

[https://www.balancedpayments.com/ach-
debits](https://www.balancedpayments.com/ach-debits)

~~~
jamiesonbecker
Apparently not any more. Balanced is shutting down (RIP).

[https://www.balancedpayments.com/stripe/faq](https://www.balancedpayments.com/stripe/faq)

~~~
mcdougle
I emailed them with a question and that's the response I got. Too bad, there
seems to be a need out there for an automated bank account payment API.

------
mrweasel
It's only getting harder if you have "inline" payment. Honestly I'm glad to
see that go away.

We always used a "hosted payment page" solution; it's safer, and we've been
expecting tougher rules for some time.

If you want to talk about online payment becoming harder, you should address
the increasing number of payment options that online stores need to support.
In addition to debit and credit cards, there are bank transfers (which differ
in every country), PayPal, invoicing, part-payments, financing, and an almost
unlimited number of local options.

------
ukigumo
Well, I guess it won't hurt if I offer my services for PCI guidance for
startups here :-)

One thing to keep in mind is that PCI is a bare-minimum of security "best
practices" that aims at validating that a company transacting with payment
cards has an understanding of data classification and protection.

~~~
akshatpradhan
If compliance is bare minimum and not enough, what is a comprehensive
approach available right now to reasonably protect our sensitive data? The
security professionals will tell you that doing risk assessments and pen
testing often is the best alternative [1].

Their answer is to specifically switch to doing risk assessments and pen
testing often, which is Requirement 11 and Requirement 12 of PCI. Each one of
the bullets written is specifically covered by PCI DSS 3.1, including social
engineering/phishing attacks, which are addressed through security awareness
training. They're telling me that compliance is bare minimum, yet their
suggestion is to do a subset of compliance. It's circular logic. Since it's
circular logic and nobody has been able to provide me with a reasonable,
approachable alternative for going above bare minimum, I claim that
compliance is NOT bare minimum but, in fact, due diligence.

Think of a fort. Forts had defined compliance checklists in the old times. In
a fort, you go through a security rotation of making sure the pot of boiling
oil tips over on time. You practice your smoke signaling so that the
appropriate people are notified in the event of a wall breach. Were they
spending a majority of their security drills taking half their army, launching
it against the fort, fixing what fails, and then doing a risk assessment?

[1]
[https://gist.github.com/akshatpradhan/1573e5f6c1872b6af129](https://gist.github.com/akshatpradhan/1573e5f6c1872b6af129)

~~~
ukigumo
The most comprehensive approach is to have an InfoSec policy portfolio which
permeates into every corner of your organisation and dictates secure operating
behaviours and mandates logical and physical security practices. This will
include regular vulnerability scans on your code, your application stack and
your infrastructure but it will also include instructions on how to classify
data and how to handle data according to that classification.

Compliance is achieved by checking off a checklist, which is why it's fairly
easy to botch. Sure, you can do a subset of the checklist and have
compensating controls for everything you've missed, but the risk of
non-compliance is not being able to do business (at best) and jail time (at
worst), so you tell me: what is your motivation for failing to meet the bare
minimums of security best practices in the card payment industry, aka PCI DSS?

Think of a castle; it will have several walls, towers, heavy doors, guards,
etc. It will also be placed on a hill, a mount, or some otherwise
hard-to-access area (never in a vale, for instance). It will also have the
largest possible distance between the treasure hall and the front door. The
threats your castle faces will continuously evolve, and the walls that stood
up against bows and arrows are useless against turrets or cannons, so if you
want to keep your treasure, you do your best to stay one step ahead - and you
don't get that by making sure your original walls are still in place or any
other base requirements are still met.

------
dbreaker
nice recap

