
This paper produces a good result: http://large.stanford.edu/courses/2007/ph210/otey2/. If your game is OK with sounding synthetic, I've had good results simulating a string plucked off center, decaying each harmonic at a rate proportional to its frequency, then adding reverb.
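
If it helps, here's a rough sketch of that second approach (my own toy code, not the paper's method; the damping constant and other numbers are arbitrary): additive synthesis with off-center-pluck harmonic amplitudes, each harmonic decaying at a rate proportional to its frequency. Reverb is left out and could be added afterwards.

    import numpy as np

    def pluck(f0=220.0, pluck_pos=0.2, dur=2.0, sr=44100, n_harmonics=20, damping=0.003):
        # Additive synthesis of a string plucked at fraction `pluck_pos` along its length.
        t = np.arange(int(dur * sr)) / sr
        out = np.zeros_like(t)
        for n in range(1, n_harmonics + 1):
            fn = n * f0
            amp = np.sin(np.pi * n * pluck_pos) / n**2   # off-center pluck spectrum
            decay = np.exp(-damping * fn * t)            # higher harmonics die out faster
            out += amp * decay * np.sin(2 * np.pi * fn * t)
        return out / np.max(np.abs(out))                 # normalize to [-1, 1]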

A sound font or VST plug-in with appropriate licenses may be another route, but I can't speak to how difficult that would be to work with. It's on my to-do list.


Yes this looks perfect for what I'm trying to do. Thanks!


I wish comparisons like this would include Unicode coverage. I have code that uses symbols, arrows, etc. to avoid needing image assets, or some non-English text that needs accented characters or CJK glyphs.


The easiest solution would be to allow custom text (with user-selectable syntax highlighting).


The Cypress PSoC might be of interest. It's a combination FPGA and microcontroller, and the low-end demo boards with USB programming go as low as ~$10.

Though most of what you can do with the FPGA would be covered by built-in timer/capture modules or the Pico's PIO modules on other chips (though IIRC, some of the boards have analog support in the FPGA if you wanted to do real-time audio processing). I haven't found a particular hobbyist use for them myself; I only know of them from my father, who taught some computer engineering courses and picked them since he could use the same board for both digital logic and microcontroller programming.


You can theory-craft a non-malicious justification: insurance trusts that Dr. Hibbert will perform the procedure without complications and has negotiated a fixed price of $1K. Insurance doesn't trust Dr. Nick and believes any procedure he performs will result in a second claim later to set things right. Insurance strongly wants you to choose Dr. Hibbert, and the only leverage they have is to refuse to pay for Dr. Nick should you go with him.

Not sure how plausible that is, though. I suppose they could data-mine the frequency of follow-up treatment required per doctor, but I've never observed in-network/out-of-network status corresponding to any meaningful metric (our local dentist, recommended by all the dental specialists around, doesn't deal with any insurance companies, while the in-network dentist is pretty clearly padding their work).


Why doesn't the insurance company trust Dr. Nick?

If there's reason to not trust Dr. Nick, then surely the insurance company must disclose it. If there's reason to not trust Dr. Nick, then surely Dr. Nick would have trouble maintaining a medical license. If there's reason to not trust Dr. Nick, then surely Dr. Nick's own malpractice insurance would become too onerous for him to keep.

No, this smells exactly like what @Someone1234 stated:

> Out-of-network was always just a sketchy way for insurance to avoid paying what they should have paid.


Sometimes the trust issue is more about fraudulent claims than one's ability to practice medicine...


Then why is he still in business? If he's committing insurance fraud, then the insurance company should work with regulators to stop and potentially prosecute him. The onus to prevent fraud should not be on me as a patient.


> If there's reason to not trust Dr. Nick, then surely the insurance company must disclose it.

That should include "we think Dr. Nick makes fraudulent claims."


Yeah. There is a legitimate quality reason, but it doesn't appear to be how they actually operate. Reality is more like a local case that resulted in a lawsuit (I never heard the outcome) against the insurance company for sending patients to your Dr. Nick.


I mean, Dr. Hibbert might refuse to be in network with some insurance company because he doesn't like their rates. I think that's more common.


I usually filter by "Retailer: Walmart.com" for any common and/or name-brand item (and the price/shipping is usually better than third-party sellers anyway). You still have to take your chances on more obscure items, but at least for me that's rare.


My experience with HW HSMs has been that the FIPS process is so expensive that companies are only willing to put out a new FIPS-certified version once a year. Also, the certification itself seems to be more concerned with high-level security requirements than with proof that any particular feature of your HSM works correctly.

So the answer to any particular bug is typically to wait until next year's version, which includes all the bug fixes the normal releases have built up over the past year, or to re-evaluate whether you really need the certification.


We typically assume that loans are taken for some immediate use (rather than just having cash on hand). Thus you get the following scenario:

Bank loans $money to person A. Person A uses the $money to buy from person B. Person B deposits $money into the Bank. Bank now has $money (less reserve requirements) available to lend.
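
To make the mechanism concrete, here's a toy version of that loop (the 10% reserve requirement is just a made-up number for illustration):

    def total_new_loans(initial_deposit, reserve_ratio, rounds=100):
        # Each loan gets spent, redeposited, a fraction is held in reserve,
        # and the remainder is lent out again.
        total = 0.0
        lendable = initial_deposit * (1 - reserve_ratio)
        for _ in range(rounds):
            total += lendable
            lendable *= (1 - reserve_ratio)
        return total

    # Converges toward initial_deposit * (1 - r) / r, i.e. ~$9,000 of new
    # lending from a $1,000 deposit at a 10% reserve requirement.
    print(total_new_loans(1000, 0.10))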


If you attempted to sell those treasury bonds now on the secondary market, you would have to accept the same $99 the shares of VFITX are worth. The immediate, liquid value is the same either way. (You need a common unit to compare in, instead of VFITX in today's dollars versus the bonds at principal value.)
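
Rough sketch of what I mean by a common unit (illustrative numbers only, not the actual figures in this thread): price the bond by discounting its remaining cash flows at the current market yield, and it gets marked down the same way the fund's NAV does when rates rise.

    def bond_price(face, coupon_rate, years_left, market_yield):
        # Present value of the remaining coupons plus the principal repayment.
        coupon = face * coupon_rate
        pv = sum(coupon / (1 + market_yield) ** t for t in range(1, years_left + 1))
        return pv + face / (1 + market_yield) ** years_left

    print(bond_price(100, 0.02, 5, 0.02))    # ~100.0 if yields haven't moved
    print(bond_price(100, 0.02, 5, 0.0225))  # ~98.8 after yields rise, same haircut as the fund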

If you hold the bonds and/or VFITX instead, the interest payouts of the bonds and the distributions of VFITX should also come out equal (well, not exactly: the fund has the advantage that it can change its composition by buying/selling bonds, but also has the overhead of selecting and performing those transactions).

(In reference to your comment below: yes, a fund != holding bonds. The fund is closer to you buying the bonds yourself, but also buying/selling as bonds mature or as you anticipate changes in rates.)


As mvilim noted in his response to my comment, the outcome is actually not equal, and VFITX underperformed bonds over the same period, by a larger amount than can be explained by its expense ratio.


Isn't that the logical approach to teaching it, though? Start with the base case, analyze how it functions, then start looking at how it changes when you remove an assumption.

Similar to how physics starts with perfectly elastic point masses in a vacuum on a frictionless plane.


I used to think of the world of Physics 101 as PhysicsLand. It is a horrible place. There is no air. A huge number of infinitely sharp, yet infinitely rigid objects lie in wait to slice you to bits; even cubes can't be trusted with their infinitely sharp edges. Feathers and bowling balls are constantly falling on you from above, and there's no terminal velocity to slow them down. You can't even stand up to escape, unless you've been there long enough that friction has finally gotten installed. But beware, because shortly after that, the infinitely sharp, infinitely rigid things start whirling around.

It's a horrible place... and it's as far as most people make it in physics. Econ 101 may only be able to teach very simple models before most people likewise wander away to learn something else, but they are still useful models. I still think people come away from discussions of how irrational humans are, how simplified economic models are, and all these other things, and conclude that the basics of Supply and Demand have therefore been disproved and that they can go off and socially engineer without having to worry about them... but they haven't been. The Law of Supply and Demand may fractal off into ever-more-complicated corner cases as you approach its edges, most of which happen in the real world, but the core idea is still valid, and you're still naked in the face of the real world's complexity if you think it isn't relevant, just as, for all the immense simplification, Physics 101 is still relevant to the real world even if it's the only physics you ever take.


> Isn't that the logical approach to teaching it though?

If you assume that what used to be called “the standard social science model” centered on rational choice theory is, if not actually right, a reasonable approximation for common conditions, akin to Newtonian mechanics, then sure.

OTOH, if, as seems to be increasingly common (across the social sciences), you see it as more akin to Aristotelian mechanics, which coincidentally looks like some real phenomenon but gets the mechanism wrong in its general outline of how things behave, then, no.


The problem is that Physics 101 is a good approximation to reality, and you know where it applies. Free markets are not a good approximation of normal markets, because the strategies of the actors are completely different.

The problem is with game theory: the limit of the optimal strategies for a sequence of games is not always the same as the optimal strategy for the limit game. This breaks the ability to approximate.

So, for example, you cannot carry conclusions from a game with an infinite number of actors ("free market") over to a game with a finite number of actors.


> The free markets are not a good approximation of normal markets, because the strategies of the actors are completely different

You can use freshman economics to predict the average oil price in a given year, from tables of quantities supplied and demanded. Where one finds deviation, e.g. when OPEC was founded, meaningful new information arrived.

Most markets don't follow freshman economics, which is why there is lots of interest in developing better models. But we don't start physics with CFD.


> You can use freshman economics to predict the average oil price in a given year, from tables of quantities supplied and demanded.

Not sure if I completely understand what you want to do, but if I do, this is not drawing supply/demand curves; this is just predicting prices based on the history of supply and demand. The supply/demand curves (that is, the model) are what I am criticizing.


You can't predict based on history without having a model that tells you how to extrapolate that history.


You could have a statistical model. You record supply, demand, and price over a time period, and then you can predict the price by matching it to supply and demand. No knowledge of supply/demand curves is needed.

But I am not clear on whether this is what the parent wants to do.


> You record supply, demand and price over time period and then you can predict price by matching it to supply and demand

That's what the damn curves are! Even calling them curves is misleading. Freshman economics looks at linear systems. You take data, draw a regression and then predict a price.

Supply and demand isn't voodoo. Early economics courses are inaccurate because they start with linear models that a general population of freshmen without strong mathematics training can grasp.
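
For instance (completely made-up numbers, just to illustrate the freshman exercise), the regression-and-prediction step is nothing more exotic than a least-squares fit and an extrapolation:

    import numpy as np

    # Hypothetical annual data: quantity supplied, quantity demanded, average price.
    supplied = np.array([96.0, 97.0, 95.5, 98.0])
    demanded = np.array([95.0, 96.5, 97.0, 96.0])
    price    = np.array([52.0, 49.0, 61.0, 45.0])

    # Fit price as a linear function of the two quantities (plus an intercept)...
    X = np.column_stack([supplied, demanded, np.ones_like(price)])
    coef, *_ = np.linalg.lstsq(X, price, rcond=None)

    # ...then predict next year's average price from projected quantities.
    print(coef @ np.array([97.5, 96.8, 1.0]))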


> That's what the damn curves are!

No, they are not. The curves are drawn at a given point in time; what you're doing here is recording supply/demand over time. You would additionally have to assume that the curves didn't change over that time period in order to say these data are the demand/supply curves.


> The curves are drawn at a given point in time, what you're doing here is recording supply/demand over time

Supply tables show producer activity at a point in time. Linear models don't model elasticity or endogeneity. Taking activity over a period of time is perfectly fine for this kind of model.

By the way, we discovered and characterized elasticity and endogeneity by measuring deviations from said linear model. In some cases, the deviations were predictable. That expanded the box of situations in which the model was broadly useful.

Of course the introductory model isn't useful in most cases. But it (a) can be empirically validated in a predictable set of markets and (b) naturally extends itself to cover more ground, e.g. non-linear, endogenous and failure effects.

> You would have to assume in addition that the curves didn't change over the time period so you could say this data are the demand/supply curves

This is a fine assumption for a bare-bones model. If someone wants to shrink the box of uncertainty around their predictions they can learn more finance and economics.

TL;DR: these models work well enough that people who understand them, and their limitations, will be able to make better predictions than those who don't.


It is if you plan on eventually teaching the exceptions.

But if 95% of the students aren't going to study long enough to make it to the exceptions, and 95% of the real world consists of "exceptional" markets, then maybe we should rethink how we teach it.


If you assume competitive markets are "simpler" than uncompetitive markets, this is certainly true. However, it is questionable to me whether that is actually the case.

Rather, the advantage of the traditional approach to economics isn't that competitive markets are simpler, but that the axiomatic framework built on them can be generalized in a mathematically rigorous way to describe most kinds of uncompetitive markets. In order to provide real intellectual (rather than political) competition, a different way of teaching economics would need to start with axioms of uncompetitive markets and deform them to also describe competitive markets. That doesn't seem to be happening in this book, however.


> If you assume competitive markets are "simpler" than uncompetitive markets, this is certainly true. However, it is questionable to me whether that is actually the case.

I think it's true by the Anna Karenina principle; there are many ways for a market to be uncompetitive.


This is a broader question of pedagogy. There are first-principles approaches to teaching, and then there are example-based, case-study, and other approaches.

I happen to have sympathies with first principles, but I've learned over time as a teacher that the approach can be quite flawed. I've grown to embrace statistical learning: show lots of complex real-world cases until the aggregate makes the statistically significant patterns common to all the cases clear to the students. That's often actually superior to the abstract fiction of first principles.


"physics starts with perfectly elastic point-masses in vacuum on a friction-less plane"

It's been a while, but I'm pretty sure our high school physics classes started with weights accelerating (or not) down an inclined plane. That is, we actually started with trolleys pulling paper tape through ticker timers...


Sorry to go off topic, but isn't this a case where "reasonable" would be a better choice of word than "logical"? There's nothing particularly logical about teaching a thing one way over another.


If I'm reading the post right, it's stricter than that:

> ...distrusting certificates whose validity period (the difference of notBefore to notAfter) exceeds the specified maximum.

I.e., a certificate valid from 1/2015..1/2019 is distrusted as of Chrome 59.

And the more lax restrictions only apply to certificates that have already been issued. Any certificate issued after Chrome 61 is held to the tightest (9-month) limit.

> In addition, we propose to require that all newly-issued certificates must have validity periods of no greater than 9 months (279 days) in order to be trusted in Google Chrome, effective Chrome 61.
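
So the check, as I read the proposal (my own sketch, not Chrome's actual code), is just notAfter minus notBefore compared against the cap:

    from datetime import datetime

    MAX_VALIDITY_DAYS = 279  # the proposed 9-month limit for newly issued certs

    def exceeds_limit(not_before, not_after):
        return (not_after - not_before).days > MAX_VALIDITY_DAYS

    # The 1/2015..1/2019 example above: roughly four years, far over any of the caps.
    print(exceeds_limit(datetime(2015, 1, 1), datetime(2019, 1, 1)))  # True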


I agree with the limit on newly issued certs. The other part, that's going to be painful and a mess to figure out if you are right... and it's quite possible that you are. UGH.

EDIT: Update: I'm not sure you are correct. I just downloaded the latest dev release (Version 59.0.3047.0 (Official Build) dev (64-bit)) and it accepts a RapidSSL-issued (Symantec-owned) cert valid for 1187 days, which would exceed the 1023-day limit.

It's also possible it just hasn't made it into the release yet. I'll have to keep a daily eye on this, and plan to replace certs much sooner just in case.


> It's also possible it just hasn't made it into the release yet,

The proposal was just made, so you shouldn't expect it to be reflected in code just yet.

