Wouldn't it be more explicit if there were a designated keyword that objects could use to access their own members, instead of implicitly assuming that the first argument to a method is a reference to the object, called 'self' only by convention?
Honestly, I think it would have made some amount of sense to make super a method defined on the base object so that
    def __init__(self, x, y):
        self.super(x, y)
would be the convention. There may be some infra issue with this (and in general, `super` has to be kind of magical no matter how you handle it). But yes, in general I can vibe with "super is too implicit".
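For comparison, here is what the actual Python convention looks like today (class names made up for illustration); the hypothetical `self.super(...)` spelling appears as a comment:

```python
class Base:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Child(Base):
    def __init__(self, x, y):
        # Today's convention: super() is builtin/compiler magic, not a method
        # on the object. The hypothetical convention above would instead read:
        #     self.super(x, y)
        super().__init__(x, y)
        self.magnitude = (x ** 2 + y ** 2) ** 0.5

c = Child(3, 4)
```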
Ruby generally (I think? I haven't seen much Ruby code) uses "@" instead of "self.", and "@@" instead of "ParticularClassName." (btw, "self.__class__" doesn't cut it), and it seems to produce no namespace pollution.
I don't think this is a naming issue at all. In the provided example, 'Accuracy' is the correct name for the parameter, as that's what the parameter represents: accuracy. The fact that accuracy should be a value in the interval from 0 to 1 should be a property of the parameter's type. In other words, the parameter should not be a float, but a more constrained type that allows floats only in [0,1].
EDIT: Some of you asked what about languages that don't support such more constrained types, so to answer all of you here: different languages have different capabilities, of course, so while some may make what I proposed trivial, in others it would be almost or literally impossible. However, I believe most of the more popular languages support creation of custom data types?
So the idea (for those languages at least) is quite simple - hold the value as a float, but wrap it in a custom data type that makes sure the value stays within bounds through accessor methods.
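A minimal Python sketch of that idea (all names hypothetical): the float is hidden behind a wrapper whose constructor enforces the bounds, so any function that accepts the wrapper type can trust the value:

```python
class UnitInterval:
    """Wraps a float and guarantees it stays in [0, 1] (hypothetical name)."""

    __slots__ = ("_value",)

    def __init__(self, value):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"expected a value in [0, 1], got {value!r}")
        self._value = float(value)

    @property
    def value(self):
        # Read access only; there is no setter, so the invariant can't be broken.
        return self._value

def apply_accuracy(accuracy):
    # Hypothetical consumer: by the time we're here, the range check already ran.
    return accuracy.value
```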
velocity() simply returns the first argument, after doing validity checking based on the second argument.
You probably couldn't reasonably use this everywhere that you would use actual constrained types in a language that has them, but you could probably catch a lot of errors just using them in initializers.
This got me thinking: What about a situation where the accuracy is given in a real-life unit. For example, the accuracy of a GPS measurement, given in meters. I've sometimes used names like 'accuracyInMeters' to represent this, but it felt a bit cumbersome.
Edit: Thinking more about it, I guess you could typealias Float to Meters, or something like that, but that also feels weird to me.
More complex type systems absolutely support asserting the units of a value in the type system. For example, here's an implementation of SI types in C++: https://github.com/bernedom/SI
I've used "fraction" for this purpose, but that isn't general enough. In fact, a convention I've used for nearly two decades has been varName_unit, where the part after the underscore (the preceding part being camel case) indicates the unit of the value. So (x_frac, y_frac) are normalized screen coordinates, whereas (x_px, y_px) would be pixel-unit coordinates. Others are freq_hz, duration_secs, and so on.
Another thing you can do is define a "METER" constant equal to 1. You can then call your function like this: func(1.5 * METER), and when you need a number of meters, you can do "accuracy / METER". The multiplication and division should be optimized away.
Good thing about that is that you can specify the units you want, for example you can set FOOT to 0.3048 and do "5. * FOOT" and get back your result in centimeters by doing "accuracy / CENTIMETER". The last conversion is not free if the internal representation is in meter but at least, you can do it and it is readable.
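A quick sketch of the constant trick in Python (function name hypothetical); everything is stored in the internal unit, meters, and callers multiply or divide by the unit they want:

```python
# Unit constants (values are expressed in the internal unit, meters).
METER = 1.0
CENTIMETER = 0.01
FOOT = 0.3048

def gps_accuracy(radius):
    # Hypothetical function: internally everything is meters.
    return radius

# Callers make their units explicit:
r = gps_accuracy(5.0 * FOOT)   # pass a value given in feet
r_cm = r / CENTIMETER          # read the result back in centimeters
```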
If you are going to use such distances a lot, at least in C++, you can get a bit of help from the type system. Define a "distance" class with operator overloads, constants, and convenience functions to enforce consistent units. Again, the optimizer should make it no more costly than using raw floats, if that's what you decide to use as an internal representation.
Some languages provide more than just an alias. Eg Haskell lets you wrap your Float in a 'newtype' like 'GpsInMeters'.
The newtype wrapper doesn't show up at runtime, only at compile time. It can be set up in such a way that the compiler complains about adding GpsInMeters to GpsInMiles naively.
> While true, if your language doesn’t support such a type
You'd be surprised where the support is. In C#, you would declare a struct type with one read-only field of type double, and range validation (0 <= x <= 1) in the constructor.
Yes there's a bit of boilerplate - especially since you might want to override equality, cast operators etc. But there is support. And with a struct, not much overhead to it.
I don't think runtime validation is that special. You can bend most languages into this pattern one way or another. The real deal is having an actual compile-time-checked type that resembles a primitive.
Actually what I want is just the reading experience of seeing
"public Customer GetById(CustomerId id)" instead of "public Customer GetById(string id)" when only some strings (e.g. 64 chars A-Z and 1-9) are valid customer ids.
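A rough Python equivalent of that reading experience (type and function names, plus the exact validity rule, are just the ones from the example above and are hypothetical):

```python
import re

class CustomerId:
    """Hypothetical id type: valid ids are exactly 64 chars from A-Z and 1-9."""

    _VALID = re.compile(r"^[A-Z1-9]{64}$")

    def __init__(self, raw):
        if not self._VALID.match(raw):
            raise ValueError(f"not a valid customer id: {raw!r}")
        self.value = raw

def get_by_id(customer_id):
    # The parameter type now documents itself: a plain string can't sneak in
    # without going through CustomerId's validation first.
    return {"id": customer_id.value}
```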
Compile-time validation would be ideal, but validation at the edges, well before that method, is good enough.
The main issue with techniques such as this, which are certainly easy to do, is that if it’s not in the type system and therefore not checked at compile time, you pay a run time cost for these abstractions.
Great that you like typed languages, and ones that allow for such constrained/dependent typing as well.
It seems disingenuous to me to suggest that anyone using other languages does not have this problem. And really, there are quite a few languages that do not have this form of typing, and even some reasons for a language not to want it.
So please, don't answer a question by saying "your question is wrong"; it is condescending and unhelpful.
Equally condescending is saying "It's great that you like X, but I don't so I'm going to ignore the broader point of your argument."
The point remains that the fact that a given parameter's valid values are [0,1] is not a function of its name. You can check the values within the method and enter various error states depending on the exact business rules.
"your question is wrong" is indeed unhelpful, especially as a direct response to someone asking a question.
"here is what seems like a better question" is helpful, especially in a discussion forum separate from the original Q/A.
But if "here is what seems like a better question" is the _only_ response or drowns out direct responses, then that's still frustrating.
> condescending
As a man who sometimes lacks knowledge about things, when I ask a question, please please please err on the side of condescending to me rather than staying silent. (No, I don't know how you should remember my preferences separately from the preferences of any other human)
I'm genuinely sorry if I came across as condescending, that was not my intention at all.
I merely wanted to point out that, in my opinion, this property should be reflected in parameter type, rather than the name. Just like, if we wanted a parameter that should only be a whole number, we wouldn't declare it as a float and name it "MyVariableInteger" and hope that the callers would only send integers.
You mentioned that there are quite a few languages that do not permit what I proposed, would you mind specifying which ones exactly? The only one that comes to my mind is assembly?
So, then the user calling the library with foo(3.5) will get a runtime error (or, ok, maybe even a compile time error).
To avoid that, you need to document that the value should be between 0 and 1, and you could do that with a comment line (which the OP wanted to avoid), or by naming the variable or type appropriately: And that takes us back to the original question. (Whether the concept is expressed in the parameter name or parameter type (and its name) is secondary.)
> So, then the user calling the library with foo(3.5) will get a runtime error (or, ok, maybe even a compile time error).
I'm not sure I understand this. See below, but the larger point here is that the type can never lie -- names can and often do because there's no checking on names.
I think what is being proposed is something similar to
    newtype Accuracy = Accuracy Float
and then to have the only(!) way to construct such a value be a function
    mkAccuracy :: Float -> Maybe Accuracy
which does the range checking, failing if outside the allowable range.
Any functions which needs this Accuracy parameter then just take a parameter of that type.
That way you a) only have to do the check at the 'edges' of your program (e.g. when reading config files or user input), and b) ensure that functions that take an Accuracy parameter never fail because of out-of-range values.
It's still a runtime check, sure, but by having a strong type instead of just Float, you ensure that you only need that checking at the I/O edges of your program, and you get absolute assurance that any Accuracy handed to a function will always be in range.
You can do a similar thing in e.g. C with a struct, but unfortunately I don't think you can hide the definition such that it's impossible to build an accuracy_t without going through a "blessed" constructor function. I guess you could do something with a struct containing a void ptr where only the implementation translation unit knows the true type, but for such a "trivial" case it's a lot of overhead, both code-wise and because it would require heap allocations.
Your solution is the ideal one and the safest, although in the interest of maximum flexibility, since the goal here seems more documentative than prescriptive, it could also be as simple as creating a type alias. In C, for example, a simple `#define UnitInterval float`, and then actual usage would be `function FuncName(UnitInterval accuracy)`. That accomplishes conveying both the meaning of the value (it represents accuracy) and the valid value range (assuming, of course, that UnitInterval is understood to be a float in the range of 0 to 1).
Having proper compile time (or runtime if compile time isn't feasible) checks is of course the better solution, but not always practical either because of lack of support in the desired language, or rarely because of performance considerations.
That's fair, but I do personally have a stance that compiler-checked documentation is the ideal documentation because it can never drift from the code. (EDIT: I should add: It should never be the ONLY documentation! Examples, etc. matter a lot!)
There's a place for type aliases, but IMO that place is shrinking in most languages that support them, e.g. Haskell. With DerivingVia, newtypes are extremely low-cost. Type aliases can be useful for abbreviation, but for adding 'semantics' for the reader/programmer... not so much. Again, IMO. I realize this is not objective truth or anything.
Of course, if you don't have newtypes or similarly low-cost abstractions, then the valuation shifts a lot.
EDIT: Another example: Scala supports type aliases, but it's very rare to see any usage outside of the 'abbreviation' use case where you have abstract types and just want to make a few of the type parameters concrete.
Sure, such other languages have the problem too, it's just that they are missing the best solution. It's possible for a solution to be simultaneously bad and the best available.
In languages with operator overloading you can make NormalizedFloat a proper class with asserts in debug version and change it to an alias of float in release version.
Similarly, I wonder why geometry libraries don't define separate Point and Vector classes; they almost always use a Vector class for both vectors and points.
I understand the math checks out, and sometimes you want to add or multiply points, for example:
    Pmid = (P0 + P1) / 2
But you could cast in such instances:
    Pmid = (P0 + (Vector)P1) / 2
And the distinction would surely catch some errors.
    Point  -   Point  = Vector
    Point  +   Point  = ERROR
    Vector +/- Vector = Vector
    Point  +/- Vector = Point
    Point  *   scalar = ERROR
    Vector *   scalar = Vector
    Point  * or x Point = ERROR
    Vector *   Vector = scalar  (dot product)
    Vector x   Vector = Vector  (cross product)
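The table above maps directly onto operator overloading; here's a minimal Python sketch (a C++ version would look much the same), where the unsupported combinations raise TypeError:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    x: float
    y: float

    def __add__(self, other):               # Vector + Vector = Vector
        if isinstance(other, Vector):
            return Vector(self.x + other.x, self.y + other.y)
        return NotImplemented

    def __mul__(self, s):                   # Vector * scalar = Vector
        if isinstance(s, (int, float)):
            return Vector(self.x * s, self.y * s)
        return NotImplemented

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def __sub__(self, other):
        if isinstance(other, Point):        # Point - Point = Vector
            return Vector(self.x - other.x, self.y - other.y)
        if isinstance(other, Vector):       # Point - Vector = Point
            return Point(self.x - other.x, self.y - other.y)
        return NotImplemented

    def __add__(self, other):
        if isinstance(other, Vector):       # Point + Vector = Point
            return Point(self.x + other.x, self.y + other.y)
        return NotImplemented               # Point + Point -> TypeError

# Midpoint without ever scaling a Point: P0 + (P1 - P0) * 0.5
mid = Point(0.0, 0.0) + (Point(2.0, 4.0) - Point(0.0, 0.0)) * 0.5
```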
I work in games where these values are extremely common and 'accuracy' wouldn't be very descriptive in a lot of circumstances: explosion radius falloff damage, water flow strength, positional/rotational lerps or easing, and more.
I wish I were commenting here with an answer, but I don't have one. "brightness01" is a common naming convention for values of this type in computer graphics programming, but niche enough that it got raised in review comments by another gameplay programmer.
That's a very good observation. We could still use a (new) term for this common type. Maybe floatbit, softbit, qubit (sic), pot, unitfloat, unit01, or just unitinterval as suggested?
This raises an interesting tangential question: which programming languages allow such restricted intervals as types?
    type percentage := int[0,100]
    type hexdigit   := int[0,15]
…
Since this might be overkill, sane programming languages might encourage assert statements inside the functions instead.
I think this is right, but it's still IMO basically a natural language semantics issue. For instance in haskell (which has a pretty advanced static type system), I would still probably be satisfied with:
    -- A float between 0 and 1, inclusive.
    type UnitInterval = Float

    foo :: UnitInterval -> SomeResultPresumably
    foo accuracy = ...
i.e. I think the essential problem in the SO question is solved, even though we have no additional type safety.
A language without type synonyms could do just as well with CPP defines
Looks like it’s actually possible to string something like this together in Python; custom types are of course supported, and you can write a generic validation function that looks for your function’s type signature and then asserts that every UnitInterval variable is within the specified bounds.
You’d have to decorate/call manually in your functions so it’s not watertight, but at least it’s DRY.
Nothing wrong with the name ZeroToOneInclusive. Seems like a great type to have around, and a great name for it. UnitFloat or UnitIntervalFloat or other ideas ITT are cuter but not much clearer.
It's a succinct way to say "No, not types that will automatically work as primitive types (which is how the value passed for 0 to 1 would normally be handled), and that will work with numeric operators".
Or in other words, a succinct way to say "Technically yes, but practically useless, so no".
In Elm (and many other languages, I assume, I'm just most familiar with Elm) there's a pattern called "opaque data type". [0] You make a file that contains the type and its constructor but you don't export the constructor. You only export the getter and setter methods. This ensures that if you properly police the methods of that one short file, everywhere else in your program that the type is used is guaranteed by the type system to have a number between zero and one.
    -- BetweenZeroAndOne.elm
    module BetweenZeroAndOne exposing (BetweenZeroAndOne, get, set)

    type BetweenZeroAndOne
        = BetweenZeroAndOne Float

    set : Float -> BetweenZeroAndOne
    set value =
        BetweenZeroAndOne (Basics.clamp 0.0 1.0 value)

    get : BetweenZeroAndOne -> Float
    get (BetweenZeroAndOne value) =
        value
You would just make the constructor return a possible error if it's not in range, or maybe some specialty constructors that may clamp it into range for you so they always succeed.
It's the same question of, how can you convert a string to a Regexp type if not all strings are valid Regexps?
Croatian here. You're correct, if you only said the word "spomenik" to anybody from former Yugoslavia, they wouldn't be able to distinguish whether you're talking about precisely those monuments built after WW2, or any other monument built before or after the communist era. Or even, say, Washington monument (In Croatian it's literally called "Washingtonov spomenik"). The word "spomenik" means "a monument" and just that. I was actually surprised to learn that in English the word refers to monuments from a particular location and particular era.
Well, there are other examples. To me, Bohemians are my nation; to English people, "bohemian" describes behavior that used to be seen here during the Middle Ages (and was way overblown in Catholic-driven news sources at the time; Prague's people were Protestant).
Why not? Your chances of choosing the correct door were 1 in 3 from the start. That doesn't change with the fact that the host opens the doors at random.
Sorry for the late reply, but if you look at the original article on GitHub I explore that scenario in the code. The answer is that if Monty randomly opens a door then the contestant automatically loses 1/3 of the time before given a chance to switch.
That's what the GP said. In reality, the host doesn't open the door at random. They know which door hides the car and avoid opening it. That's why switching the doors doubles your chances of finding it (because the host has already eliminated one of the goat doors).
Your chance was 1 in 3, yes, but if the host can randomly open the car door, there is a 1 in 3 chance of that happening, leaving only a 1 in 3 chance that switching is both possible and useful.
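A quick simulation makes the difference concrete (a hypothetical helper, not the GitHub code mentioned above): with a knowing host, switching wins about 2/3 of the time; with a random host, a third of the games end before the switch is even offered, and switching only wins about 1/3 overall:

```python
import random

def play(switch, host_knows, trials=100_000, seed=1):
    """Estimate the contestant's overall win rate. With a random host, rounds
    where the host reveals the car count as immediate losses."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        if host_knows:
            # A knowing host always opens a goat door that isn't the pick.
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            opened = rng.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue  # car revealed: the contestant loses before switching
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials
```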
If you only ever offered players the chance to switch when they had picked the car, they'd soon realize this and never switch when offered. Also, players who picked a goat at first would have no chance of winning the car.
That's assuming players know about the other players, which is not specified in the problem description. It is not even specified whether the same game is played several times with the same rules.
Given the description from the article:
> Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
I'd say no, it is not to my advantage, because I'd think the host would only ask me to switch if I had made the "right" choice and wanted to make me lose.
Unless I knew that the host always asks whether one wants to change, in which case the paradox indeed applies.
Game show regulations would not allow that behavior. The problem assumes a typical game show in western media so the reader should assume the player knows all the rules governing the host’s behavior, which you can see in the other discussions.
> the ability to defend yourself against a tyrannical dictatorship made sense until the government developed better technology, now it's pointless so just give up your guns?
Basically, yes. Do you honestly think people would have any chance against probably the most powerful army in the world?
Sure, they could try fighting guerrilla warfare, and they'd even inflict some casualties on the enemy, but it's unlikely that in the end they'd succeed against an army that is professional, highly skilled, better equipped, has better offensive and defensive capabilities, knows a lot more about tactics and logistics, and has trained for this type of situation on a daily basis.
> Are they going to destroy their own infrastructure?
Would they even consider it their own infrastructure? Or would they consider it infrastructure currently held by rebels, which needs to be either seized or destroyed?
> Do you think the real men and women of the military would follow orders to destroy its own hometowns and families?
I suspect a lot of them would destroy towns if they were told that these are now enemy bases. This has been repeated in many parts of the world throughout history, even recently. If they wouldn't, they'd be defectors, and it really wouldn't matter whether the war was fought with modern weapons or sticks and stones.
It wouldn't work as you imagine, because it would be far too expensive for the government. In the Middle East it works out because none of our infrastructure is affected by the war; our GDP, and thus tax revenue, is still strong. In a civil war where the government is bombing its own infrastructure, the cost per kill will skyrocket and the effect on the economy will be catastrophic. Fighting a defensive war is immeasurably cheaper than an offensive one, as the defenders value the lives of their soldiers much less than the offenders do. Also keep in mind that a rebel faction could very easily sabotage critical infrastructure like electricity, which would be very difficult to repair in a timely manner.
Relying on very expensive advanced weaponry is the modern equivalent of relying on mercenaries, and Machiavelli told us why mercenaries are bad.
>>>Do you honestly think people would have any chance against probably the most powerful army in the world?
You are placing waaaay too much faith in technology. Look at the Saudis: one of the world's highest military budgets and stockpiles of first-rate Western hardware... and they are getting absolutely routed by the Houthis, who run up desert mountains with just sandals, an AK, and a mouth full of stimulants.
>>>Sure, they could try fighting guerrilla warfare, and they'd even inflict some casualties on the enemy, but it's unlikely that in the end they'd succeed against an army that is professional, highly skilled, better equipped, has better offensive and defensive capabilities, knows a lot more about tactics and logistics, and has trained for this type of situation on a daily basis.
What is the data that is driving you to this conclusion? Are you ignoring pretty much every counter-insurgency experience the US has had for the last 50 years?
[2][3]
>>>I suspect a lot of them would destroy towns if they were told that these are now enemy bases.
I suspect you don't know actual American military personnel very well, especially officers and NCOs, and how seriously we take the Laws of Warfare, AND the Constitution.
> I suspect you don't know actual American military personnel very well, especially officers and NCOs, and how seriously we take the Laws of Warfare, AND the Constitution.
You're right, I don't. But, if Americans don't need to fear that they'll have to fight the US Army, why have the 2nd amendment at all? Who would they need to protect themselves against?
Rioters, rogue police, vigilantes, rogue militias, nazis, militaries commanded by those not upholding the constitution. Honestly, your argument makes it more sensible that we should loosen restrictions and allow more lethal weapons.
> Sure, they could try fighting guerrilla warfare, and they'd even inflict some casualties on the enemy, but it's unlikely that in the end they'd succeed against an army that is professional, highly skilled, better equipped, has better offensive and defensive capabilities, knows a lot more about tactics and logistics, and has trained for this type of situation on a daily basis.
The one that got basically wiped out despite foreign backing (though the regular army that was their most direct supporter—the North Vietnamese Army—intervened and ultimately won the war after they were crushed)? Yeah, heard of them.
They kind of prove (or at least demonstrate) the point the grandparent post was making, though.