Police slapping phones/cameras out of someone's hands need to get control of their emotions and bear the "I will catch you slipping up" threat with confident professionalism.
That said, the criminal-record stereo-blaster was probably lying about 'his uncle' and I don't think you get to 'you have no warrant we're done here' and shut the door on the police responding to a neighbor noise complaint. If they want to give you a talking-to or a ticket (depending on local law), they can.
I do believe the guy about his phone being "lost", though (should be easy to corroborate if they have signatures/records of personal items), and we can expect real sanction for the phone-slapper if it's clear which cop did it.
"Mrs. degree" assortative mating (+ field of study preferences) make the 'females in college X' implication muddier, but overall it's good to encourage people to look at expected outcomes before opting into $100k debt.
That is, a rational woman would look at expected earnings including child support + alimony, and insist on figures stratified by field of study (so she can choose where + what to study).
"As a rising [University student / whatever]," - have people really started saying this? "Rising" as a self-descriptor sounds ridiculous. I feel like this is at least the second time I've seen that opener.
That problem has nothing to do with linear algebra. I like it! Presumably the submatrix size is limited, or there are negative numbers. Personally I'd precompute the submatrix sums S[r,c] for row 1...r and col 1...c in r*c steps, which will then allow you to give the sum for any submatrix in constant time.
Firstly, Kalman filtering is optimal for a linear system with Gaussian noise; that is, it produces exactly the correct posterior distribution. As you mention, particle filters cannot achieve this.
It's a rough heuristic that to achieve a certain accuracy for a linear/Gaussian system with a particle filter, you need a number of particles exponential in the number of dimensions of the system. I feel like this could probably be stated more formally and shown, but I don't think I've seen anything in that vein. The Kalman filter, being simply matrix operations, should scale as the number of dimensions cubed.
So yes, Kalman filtering is computationally more efficient, and (obviously) more accurate.
I also wouldn't discount the fact that the Kalman filter is, in a sense, simpler than the particle filter for a linear/Gaussian system; you don't need to worry about resampling or setting a good number of particles, and you don't need to compute estimates of the mean/covariance statistics (which are sufficient since the posterior should be a Gaussian).
Kalman Filters are super efficient to calculate - they're what kept the Apollo program on track (60s compute!). Another post gives the asymptotic complexity - but as a rule of thumb if you can do any practical computation at all you can run a Kalman Filter.
Basically they're both implementations of a recursive Bayesian filter, but the Kalman filter requires very strong assumptions about the distribution (all Gaussian) and the particle filter requires none.
The Kalman filter is optimal for the Gaussian case (and is very efficient to calculate), whilst the particle filter can use more accurate distributions but is far less efficient to calculate.
You kinda use them in different places - a Kalman filter is useless for pedestrian dead reckoning (step made + estimate of direction), whilst a particle filter would be similarly dumb on submarines.
True. Our robot uses a particle filter since we mostly localize by odometry and lidar returns. But if we had an outdoor robot using GPS like the one in the article, then our uncertainty would be a lot more Gaussian and I think we'd switch.
Also, for determining the position of your own robot the efficiency isn't a big deal. But if you're tracking a lot of objects in your environment, then it becomes more important. And since you're looking at the objects and tracking relative position, the distribution is unimodal too.
For 64-bit size_t it's pretty academic, but no, not correct.
Suppose low is M-3 and high is M-1 (where M is 2^[#bits]); then mid should be M-2. But (M-3 + M-1) is (modulo M) equal to M-4, and half that is M/2 - 2, which is a lot less than the correct M-2. (Pretend it's 2-bit unsigned ints, so M is 4.)
the correct 'mid' computation is low + (high - low)/2.
Don't know, but you still end up aligning common subsequences (advancing monotonically in both seqs), and you're still allowed the same order of computing cells as in the standard m*n dynamic program. Your restricted problem is basically asking for the standard Unix line diff.