I am building a foundational layer for building C++ apps using Bazel.
I am working on creating a standardized set of paths and third-party libraries that work seamlessly across multiple developer teams, allowing library upgrades to happen transparently in the background. This will let developers focus on business-specific logic without having to worry about the intricacies of the build system, which will just "magically" work in the background. This is also my first foray into Bazel, and I'm using it as a learning exercise to master it.
It's not JeanHeyd Meneide's (PhD) first time rage quitting. He has a history of this: he previously rage quit the C++ community and moved to the C community. I wonder where he's moving to next.
https://youtu.be/vaLKm9FE8oo
Yeah, characterising that as rage quitting is disinformation. When an organisation clearly supports misconduct by an institutionally powerful figure it is correct to call into question whether the organisation itself should continue to receive support. I'm certainly never giving a cent to the FSF ever again.
Creating extreme consequences for unethical behavior unrelated to the mission of the organization itself is problematic, because we don't live in a universe where you can know things with 100% certainty.
In fact, this whole Rust fiasco is an even greater and more blindingly obvious example of causing massive organizational rupture over a minor, accidental personal slight.
Should there be more process in place for Rust leadership? Maybe, probably. But I will skip Rust and learn Zig if I need to do something low-level, because of this disproportionate response. It's not appropriate.
It doesn't really matter if the ethical issues are tangential to the mission if the organisation can't disentangle itself from them. But in this case of the FSF the issues are not remotely tangential, they are deeply interfering with the mission. Almost all the leadership have been tainted by a clear refusal to put safeguards in place about leadership misconduct. It wouldn't be acceptable at a public corporation, and FSF claims to be a principle-driven organisation.
I don't see the Rust situation as a massive rupture -- they did several things wrong and their process for handling it was clearly broken, and they will choose a new team who will keep process and policy at the forefront of their considerations. Refusing to use Rust because of some (so far, short-term) management issues is an excessive response.
Is the code for the Search project in the mono repo as well? How does Google handle access control for their mono repos? Where's the secret sauce stored?
There are directory- and file-level ACLs. Due to AI, the secret sauce isn't as important as all of the data. Recommendation algorithms don't need to be super confidential, since they ultimately boil down to "make content that people will want recommended to them."
The author of the article should replace "software engineer" with "software developer". Not to be a gatekeeper, but that is the proper term for the role described in the article.
I find the concept of Curve pools (liquidity pools) like UST/3CRV, dual-token systems, algorithmic pegged stablecoins, and services like Anchor very complicated to follow. Is there any good literature I can read to help understand the relationship between these components? Do they map to existing components in traditional finance?
Not a specific resource, but a good one to start with would be MakerDAO's DAI. Note that when people in the space ask "will algorithmic stablecoins ever work?", they're usually referring specifically to noncollateralized or undercollateralized stablecoins (like UST). DAI is overcollateralized, so its fundamental model is not really under debate these days. However, DAI is the first major decentralized stablecoin, and a lot of its concepts and terminology reappear in later ones, so understanding what it does and how it works is a great base for understanding algorithmic stablecoins. Much like understanding Bitcoin is useful when looking at other blockchains.
For AMMs, the same applies to Uniswap. Their whitepapers (v1, v2, v3) are accessible yet formal enough to be useful.
Projects like Terra and Anchor are messy enough (some might say willfully misleading and opaque) that you want to be able to recognize lingo and patterns (and, ideally, source code) to come out with somewhat useful results. In some cases, you'll need to be monitoring their Discord, Twitter, and/or Telegram groups to follow them properly - though in those cases, I've taken that as enough of a red flag in itself to disqualify the entire project from my attention.
imo the problem with UST vs DAI was not under vs over collateralisation, but rather that Luna supporting UST had its value tied to UST adoption. On the other hand, DAI is collateralised by things like Eth, which could also in theory crash, but get their value from a much wider ecosystem than just DAI - hence one potentially avoids the circularity.
DeFi platforms will usually have a lot of tutorials and documentation. Typically, the price of an asset in a pair is computed automatically from the ratio asset1:asset2. Stablecoins also have their own docs; for example, Terra's whitepaper explains the fundamentals (mint tokens if the price is too high, burn tokens if the price is too low).
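That mint/burn peg mechanism can be sketched in a few lines. To be clear, this is an illustrative toy, not Terra's actual on-chain logic; the function name and prices are made up:

```python
# Toy sketch of a seigniorage-style peg: if the stablecoin trades above
# its target, the protocol mints new supply to push the price down; if it
# trades below, it burns supply to push the price back up.

def peg_action(market_price: float, target: float = 1.0) -> str:
    """Return the supply adjustment the protocol would make."""
    if market_price > target:
        return "mint"   # expand supply -> price falls toward the peg
    if market_price < target:
        return "burn"   # contract supply -> price rises toward the peg
    return "hold"

print(peg_action(1.03))  # mint
print(peg_action(0.97))  # burn
```

The whole design rests on arbitrageurs actually being willing to perform those mints and burns, which is exactly what breaks down in a confidence crisis.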
Yes. There is extensive literature on bank runs and similar failures caused by fractional or fraudulent reserves and lack of confidence, insurance, and regulation.
Can you please go into more detail on these "interesting properties"? I fail to see anything in terms of real analysis in the links you gave, simply descriptions of automated trading rules simple enough to be understood, but mathy enough to sell to the marks of cryptocurrencies, backtested on the growth stage of a bubble.
It solves the problem of decentralised liquidity. An order book with traditional market makers is inherently centralised, here we have incredibly simple algorithms for trading between two assets with no intermediary, and the complicated, HFT market makers of traditional finance are replaced by passive liquidity providers.
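Those "incredibly simple algorithms" are usually the constant-product rule (x * y = k) popularized by Uniswap. A minimal sketch, with made-up reserves and the conventional 0.3% fee (nothing here is any particular protocol's code):

```python
# Constant-product AMM: the pool keeps reserve_in * reserve_out constant
# across a trade, so the output amount follows directly from the invariant.

def swap_out(reserve_in: float, reserve_out: float, amount_in: float,
             fee: float = 0.003) -> float:
    """Output-asset amount received for amount_in of the input asset."""
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out               # invariant before the trade
    new_reserve_in = reserve_in + amount_in_after_fee
    return reserve_out - k / new_reserve_in    # keeps x * y = k

# A small trade against a deep pool executes near the spot price,
# which is just the ratio of the reserves:
print(swap_out(1_000_000, 1_000_000, 100))  # a bit under 100
```

Larger trades get progressively worse prices (slippage), which is what incentivizes arbitrageurs to keep the pool's ratio in line with external markets.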
Not sure why you mention backtesting, or how that would really apply.
So basically, there is no interesting inherent property (no interesting math, no deeper dynamics); it just "solves" a problem DeFi created, and it only introduces a whole new layer of possible implementation errors and new types of risk, like impermanent loss.
Backtesting is relevant because there was no stability analysis, no simulated long, drawn-out bear market, no adversarial probing. Just some minimal quant sugar to make the new gambling opportunity go down with the suckers.
In real finance, you have much more sophisticated and diversified trading algorithms, babysat and regulated for stability, with much higher effective decentralisation and control, since different HFT funds are legally barred from conspiring against the traders. Oh, and much lower fees as well.
I'm completely bewildered at what you mean in this context, given that the most conservative possible viewpoint on stablecoins is "there is no algorithm that can produce a stable coin other than 100% or better reserves in the stated currency".
Remember that "conservative" in finance means not taking risks, rather than the current political meaning of "right wing authoritarian".
which "$(echo $0)"
should work no matter the invocation ;p
If it's a full path, it still returns the full path; if it's just "bash" or such, it finds it from $PATH, badoom!
The only reason anyone would want to say "$(echo $0)", echoing the variable through a subshell and substituting it with itself, is to rely on the subshell's word splitting to remove duplicate field separators from the variable.
That is likely not the case there, and any such file names are unlikely to exist. Just say: which "$0"
As the author notes in the 2nd section, that will return the location of the shell binary in the user path, which is not necessarily the one that is currently running.
Yeah, it's a little meta. I was afraid that what I was trying to say wouldn't quite come across. It's why I mentioned "the shell I'm currently typing in" so many times.
echo $0 will tell you the exact path if it was so executed as a login shell (or otherwise with a full path). If the shell was executed otherwise, then 'which $0' will tell you where it came from, because it must have come from $PATH.
> If the shell was executed otherwise, then 'which $0' will tell you where it came from, because it must have come from $PATH.
Not true: you can be running a shell that is not in $PATH, and if you executed it from a relative path, 'which $0' will not find it once you cd to a different directory.
I have recently taken up a project to build a dashboard for realtime data analytics. I did some exploratory research into the available data analytics frameworks, mainly in Python, and found Plotly to be very popular. How much extra work would it be to use D3 on the frontend with a standard Python web server on the backend, versus just using the Plotly library?
That's the first thing I noticed about the list. I interviewed with Lyft last year and they did whiteboard style interviews. Since when did this change?