The "algorithm" journals use for checking results is to send a submission to a panel of other people in the field, ask them whether it checks out (obvious errors, overlooked issues, etc.), and publish if it does. This model has already broken down in cases where people outright fabricated results, it has missed blatant plagiarism, and it depends largely on all parties being honest.
That model wouldn't work at Coursera's scale, or at a price point they can afford.
Coursera has a huge amount of data and could build its own algorithms to check for similar code in its database. It wouldn't be perfect, but even simple string comparison against other submissions would catch the most obvious copying.
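A minimal sketch of what such a check might look like: normalize each submission (strip comments, collapse whitespace) and compare pairs with a sequence-similarity ratio, flagging pairs above a threshold. The function names, the 0.9 threshold, and the toy submissions are all hypothetical illustrations, not anything Coursera actually runs; real systems would use token- or AST-level fingerprinting (e.g. MOSS-style winnowing) rather than an O(n²) pairwise diff.

```python
import difflib
import re


def normalize(code: str) -> str:
    """Strip Python-style comments and collapse whitespace so trivial edits don't hide copying."""
    code = re.sub(r"#.*", "", code)
    return " ".join(code.split()).lower()


def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1]; 1.0 means the normalized texts are identical."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def flag_similar(submissions, threshold=0.9):
    """Compare every pair of submissions and flag pairs at or above the threshold."""
    flagged = []
    names = sorted(submissions)
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            score = similarity(submissions[x], submissions[y])
            if score >= threshold:
                flagged.append((x, y, round(score, 2)))
    return flagged


# Hypothetical student submissions: bob's is alice's with cosmetic changes.
subs = {
    "alice": "def add(a, b):\n    return a + b  # sum",
    "bob":   "def add(a, b):\n    return a+b",
    "carol": "def mul(a, b):\n    result = a * b\n    return result",
}
print(flag_similar(subs))
```

Only the alice/bob pair is flagged here: the comment and spacing differences vanish under normalization, while carol's genuinely different code scores well below the threshold.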