There actually is another way: event sourcing. I've never encountered this technique in my limited experience, but theoretically it would be the optimal way to implement such a feature, and obviously it has to be an engineering decision from the get-go.
Alternatively, you could have a very primitive form of version control: a recycle bin (edit: actually this is the same as point 1 =) )
"Similarly, Joseph Gentle, an ex-Google Wave engineer and an author of the Share.JS library, wrote: Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. ... Wave took 2 years to write and if we rewrote it today, it would take almost as long to write a second time."
A useful analogy might be how accounting ledgers work. You never really delete anything; you just keep appending records saying what you've done. The balance is just the sum of all those operations. (Ledgers are a bit special in that usually all the operations in a ledger are commutative (+/-), but other than that it seems a pretty good analogy.)
First, you start treating each event/operation as a fundamental part of your problem. Each event can then have an opposite, reverting event. So if you want to reverse an action, you just apply the opposite event. You can also easily display a list of recent events that have taken place:
- user X modified article Z
- user Y deleted article Z (revert?)
Note that I've only studied this in passing and have never applied it in practice, so I can't say just how effective it would be. I'm also not really sure how this works when other actions have already taken place, or how you can detect that an event cannot be reverted (if there is such a thing).
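To make the idea concrete, here is a minimal event-sourcing sketch with hypothetical names: state is never mutated directly; instead events are appended to a log, and an "undo" appends the inverse event rather than deleting history. The account/balance domain is just an illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class EventLogDemo {
    // Each event knows how to apply itself and how to build its inverse.
    interface Event {
        int apply(int balance);
        Event inverse();
        String describe();
    }

    record Deposit(int amount) implements Event {
        public int apply(int balance) { return balance + amount; }
        public Event inverse() { return new Withdraw(amount); }
        public String describe() { return "deposited " + amount; }
    }

    record Withdraw(int amount) implements Event {
        public int apply(int balance) { return balance - amount; }
        public Event inverse() { return new Deposit(amount); }
        public String describe() { return "withdrew " + amount; }
    }

    public static void main(String[] args) {
        List<Event> log = new ArrayList<>();
        log.add(new Deposit(100));
        log.add(new Withdraw(30));
        // Revert the withdrawal by appending its inverse, not by deleting it.
        log.add(log.get(1).inverse());

        // Current state is just a fold over the whole log.
        int balance = 0;
        for (Event e : log) balance = e.apply(balance);
        System.out.println(balance); // 100
        for (Event e : log) System.out.println(e.describe());
    }
}
```

Note how the "revert" is itself an ordinary event in the log, which is what makes the recent-activity list above fall out for free.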
To be fair, I don't think this is a problem with reified generics. The problem is that primitives cannot be erased to Objects in Java. Stream<int> only works in C# because int is an alias of Int32, not because of reified generics.
The above should no longer be an issue when/if we remove the primitive types in Java 10(?) as we can safely use Stream<Integer> and Predicate<Integer> (or Stream<int>).
The runtime would have to be quite clever to realize that an ArrayList<Integer> should be backed by a primitive array of ints. One alternative is to specialize ArrayList<T> into all variants (one for objects and one for each primitive type).
(In C#, System.Int32 is a value type and "int" is an alias for it. A List<Int32> is equivalent to a List<int> and is backed by an array of ints.)
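The Java side of this contrast is easy to demonstrate: after erasure the JVM sees only one ArrayList class regardless of the type argument, so a List<Integer> stores boxed Integer objects rather than a primitive int[]. A small sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strings = new ArrayList<>();

        // Both erase to the same runtime class: java.util.ArrayList.
        System.out.println(ints.getClass() == strings.getClass()); // true

        ints.add(42);
        // The element is stored as an Object (a boxed Integer), not an int.
        Object element = ((List<?>) ints).get(0);
        System.out.println(element.getClass().getName()); // java.lang.Integer
    }
}
```

In C#, by contrast, List<int> is a distinct constructed type at runtime, which is why it can be backed by an unboxed array.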
Don't be. These kinds of problems are algorithmic problems and have very little bearing on your capabilities as a developer from an engineering perspective.
For me, it's like saying someone who is poor at physics makes a poor accountant.
While a fun exercise, I feel that it doesn't really test programming ability, since it requires the programmer to be familiar with this particular subset of algorithms first (something that isn't entirely fundamental imho). I don't believe many jobs require the skillset this problem tests for.
For example, I had very little idea how to approach this problem optimally. But once I read the comment above, I figured out a pretty brute-force method: you can get 34 by joining contiguous segments in each column with segments in neighbouring columns when they have the same row and height. With some trimming I managed to get 30... I'm not really sure 29 is even possible; I can't find where to cut down 1 more.
Edit: found it. A very tricky edge case in my algorithm. Happy with 30.
> While a fun exercise, I feel that it doesn't really test programming ability, since it requires the programmer to be familiar with this particular subset of algorithms first (something that isn't entirely fundamental imho).
I don't think you are required to know this particular subset of algorithms first -- I certainly did not, and it took me about half an hour of searching around the net until I found the right terms to use.
Perhaps max-cardinality bipartite matching is something I knew from my first year of CS, but I didn't know its application to this problem before I had read through several different answers on StackExchange.
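For reference, the matching step itself is compact to implement. Below is a sketch of maximum-cardinality bipartite matching via augmenting paths (Kuhn's algorithm); the reduction from the rectangle-cover puzzle to a bipartite graph is not shown, and the graph in main is just a made-up example.

```java
import java.util.Arrays;
import java.util.List;

public class BipartiteMatching {
    static int[] matchOfRight;  // matchOfRight[v] = left vertex matched to v, or -1
    static boolean[] visited;
    static List<List<Integer>> adj;

    // Try to find an augmenting path starting from left vertex u.
    static boolean tryAugment(int u) {
        for (int v : adj.get(u)) {
            if (visited[v]) continue;
            visited[v] = true;
            // v is free, or its current partner can be re-matched elsewhere.
            if (matchOfRight[v] == -1 || tryAugment(matchOfRight[v])) {
                matchOfRight[v] = u;
                return true;
            }
        }
        return false;
    }

    static int maxMatching(List<List<Integer>> graph, int rightSize) {
        adj = graph;
        matchOfRight = new int[rightSize];
        Arrays.fill(matchOfRight, -1);
        int matched = 0;
        for (int u = 0; u < graph.size(); u++) {
            visited = new boolean[rightSize];
            if (tryAugment(u)) matched++;
        }
        return matched;
    }

    public static void main(String[] args) {
        // Left 0 -> {0,1}, Left 1 -> {0}, Left 2 -> {1,2}
        List<List<Integer>> g = List.of(List.of(0, 1), List.of(0), List.of(1, 2));
        System.out.println(maxMatching(g, 3)); // 3
    }
}
```

This runs in O(V·E), which is plenty for puzzle-sized inputs; Hopcroft-Karp is the faster alternative if it ever matters.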
The A320 (introduced in 1988) is purely fly-by-wire, and almost 6,000 of them have been built since. The Boeing 777 and 787 and the Airbus A330, A340, A380 and forthcoming A350 are all fly-by-wire. It's a proven technology with millions of flight hours behind it.
To fly something the size of a modern airliner, you need assistance. Even if they all used traditional control systems, you'd still need functioning hydraulics to control the aircraft. That's why we build many levels of redundancy into the system: rather than making something that can gracefully degrade to manual control, we make something so failure-resistant that it never loses power.
virtuz: It seems your account is dead. Not sure why - your comment history doesn't make it obvious. Maybe worth asking HN admins about?
Mechanical systems fail too. Cables can snap, gears can shear off, pedals can bend. As long as you require an appropriate level of redundancy, there's no reason an all-electronic plane should be any less safe.
In both of these cases, rapid decompression from a failure of the cargo door caused parts of the floor to collapse and sever or restrict the mechanical cables leading to the aft control surfaces of the plane. Although the American Airlines pilots were able to land the plane safely (they still had some control), the Turkish Airlines pilots were not so lucky.
Furthermore, even mechanical systems are assisted by hydraulics. Hydraulic systems can fail, usually by losing fluid, leaving the aircraft uncontrollable. So to add to the comments by both lmm and leoedin, safety does not hinge on whether the aircraft is controlled mechanically or electronically. There are also deployable ram air turbines that can provide a small amount of emergency backup power in the event of a total failure of all engines.
Why is this new in 2012? The "event" component is generally implemented with the Observer/Listener pattern in Java and has been done that way for yonks. The most "modern" concept he alludes to in that section is the idea of modelling everything as operations (and sub-operations), which is pretty much Command-Query-Responsibility-Segregation (CQRS).
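The Observer/Listener shape in question fits in a few lines; the names below are illustrative rather than from any particular framework: a model that notifies registered listeners whenever it changes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ObservableArticle {
    private String title;
    private final List<Consumer<String>> listeners = new ArrayList<>();

    public void addListener(Consumer<String> listener) {
        listeners.add(listener);
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String newTitle) {
        this.title = newTitle;
        // Push the change event to every registered observer.
        listeners.forEach(l -> l.accept(newTitle));
    }

    public static void main(String[] args) {
        ObservableArticle article = new ObservableArticle();
        // The "view" subscribes and reacts, rather than polling the model.
        article.addListener(t -> System.out.println("view updated: " + t));
        article.setTitle("Hello");
    }
}
```

Every Swing component has worked this way (ActionListener, PropertyChangeListener and friends) since the 90s, which is the point of the comment above.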
This obviously is not convenient for web-facing applications. Don't forget that web pages are intrinsically static once delivered, which means your model has no way of pushing events to your view. Unless we (in 2013) are suddenly advocating that every site should use WebSockets (just as bad as using PostBacks in ASP WebForms... unless you are writing a client-side application)? And let's not forget that each web application endpoint (HTTP_METHOD + URL) is in itself already an intent/operation as described here: GET = give me information; POST = mutate information.
In any case, MVC isn't fully dead; it has evolved into MV(C)VM instead. Rather than the View pulling data from the Model, the Controller hands the required information to the View.
Maybe you mean "devolved". MVC is alive and kicking where I sit. Instead I constantly see the passing corpses of MVVM and CQRS projects... Honestly, not trolling - this is what I have seen for many years.
Well, it depends on how people are using it, I guess? ;) Just because there are plenty of MVVM/CQRS projects that have failed doesn't mean that MVC is more successful... One could propose that devs are simply more familiar with the traditional architecture?
Pretty much all iOS projects are MVVM - See UITableView.
AngularJS is arguably MVVM: in this case the views are databound to the view model which is held by the controller, which is essentially the "Events" component.
Additional edit: all MVC Java Swing apps could arguably be classified as MVVM -> all your JComponents are themselves... MVC! Which really leads me to think that MVVM is really just nested MVCs... perhaps =)
Well, the problem is that if you find Controllers badly defined in MVC, good luck understanding how to use View Models properly. MVVM seems to me on many levels a re-hash of the old Document/View concept, which is... not so good, because of coupling. And in fact, many MVVM codebases end up tightly coupled - but bear in mind, so do many MVC projects, especially of the ActiveRecord sort.
My comment obviously relates to my experience, which is admittedly more webby than application-y.
I thought iOS was inherently MVC though (or at least, that's what they claim)
I'd argue that MVVM is a subset of MVC. So technically they're accurate.
And also, anything that is "badly done" becomes bad. I'm not really sure why View Models are hard to understand... your controllers shouldn't define your view models. Your VIEWS determine what information is needed to render themselves. Any templating engine that requires you to pass in a model (i.e. Handlebars, Jade etc.) is, in effect, MVVM. This makes your views more concise, since a view doesn't get access to the whole model when it isn't required; its dependency is specifically on the View Model itself.
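A tiny illustration of that point, with hypothetical names: the view declares exactly the data it needs as a view model, instead of receiving the whole domain model.

```java
public class ViewModelDemo {
    // Full domain model: carries more than any one view needs.
    record Article(String title, String body, String authorEmail, int internalId) {}

    // The view's own contract: only what it renders.
    record ArticleViewModel(String title, String body) {
        static ArticleViewModel from(Article a) {
            return new ArticleViewModel(a.title(), a.body());
        }
    }

    // The "template" depends only on the view model, never on Article.
    static String render(ArticleViewModel vm) {
        return "<h1>" + vm.title() + "</h1><p>" + vm.body() + "</p>";
    }

    public static void main(String[] args) {
        Article article = new Article("Hi", "Body text", "a@example.com", 7);
        System.out.println(render(ArticleViewModel.from(article)));
    }
}
```

The view can't accidentally grow a dependency on authorEmail or internalId, because they simply aren't reachable from its input.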
If you haven't already, I highly recommend hitting the gym. The amount of self-confidence you get when you look in the mirror and see yourself in good shape is priceless. That, and the number of hours you'll be sitting in that chair will pretty much require it if you don't want to suffer any health risks =)
Everything follows from there... you talk to people better, you carry yourself better, and eventually you'll see yourself better and maybe even start believing that you can do it.
Daryl Teo - also a loser like you. But that only makes victory more sweet.