They stop devoting all their land to the low-value subsistence crops needed to survive and start growing higher-value cash crops on some or all of it.
An example - say you have 4 acres of land and have a family of 4.
In the old world, say you needed one acre per person to grow enough food to last until the next harvest. This would be something like corn or potatoes that can keep. So all your land goes to growing food to survive and you can't make any money.
In the new world, with irrigation, you can do much more - say, for the sake of argument, 4 times the crop in the same space. Now you only need 1/4 of an acre per person, or one acre for everyone. So you grow vegetables that sell for 10 times as much on the 3 acres you no longer need to use to survive.
Or even better, you grow high-value vegetables on the entire piece of land for income and use the cash to buy your corn and potatoes or whatever as you need them.
Just as all other commercial farmers do across the world.
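The arithmetic above can be sketched out; all the prices here are made-up illustrative numbers, not real crop economics:

```python
# Illustrative numbers only: compare subsistence farming to cash-crop farming
# once irrigation quadruples yield. Values per acre are hypothetical.

ACRES = 4
FAMILY = 4

# Old world: 1 acre feeds 1 person, so all 4 acres go to food. Cash income: 0.
old_cash_income = 0

# New world: 4x yield means 0.25 acres feeds 1 person.
food_acres = FAMILY * 0.25          # 1 acre still grows the family's food
free_acres = ACRES - food_acres     # 3 acres freed up for cash crops

STAPLE_VALUE_PER_ACRE = 100                       # hypothetical corn/potato value
VEG_VALUE_PER_ACRE = 10 * STAPLE_VALUE_PER_ACRE   # "sell for 10 times as much"

# Option 1: grow your own food on 1 acre, sell veg from the other 3.
income_mixed = free_acres * VEG_VALUE_PER_ACRE

# Option 2: grow veg everywhere, buy staples with the cash.
income_all_veg = ACRES * VEG_VALUE_PER_ACRE - FAMILY * STAPLE_VALUE_PER_ACRE

print(income_mixed)    # 3000.0
print(income_all_veg)  # 3600
```

Option 2 comes out ahead, which is exactly why commercial farmers sell everything and buy their own groceries.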
In other words, solar allows them to become small business owners.
TL;DR version - it's about money and business balance sheets, not about technology.
For businesses past a certain size, going to cloud is a decision ALWAYS made by business, not by technology.
From a business perspective, having a "relatively fixed" ongoing cost (which is an operational expense, i.e. OpEx), even if it is significantly higher than what it would cost with an internal buy and build-out (which is a capital expense, i.e. CapEx), makes financial planning, taxes and managing EBITDA much easier.
Note that no one on the business side really cares what the tech implications are as long as "tech still sorta runs mostly OK".
It also, via financial wizardry, makes tech cost "much less" on a quarter over quarter and year over year basis.
There are many setups where this is just not possible. In some cases it is prohibitive because of cost; in others it is prohibited by law.
+ for the case of cost: lots of very large companies have prod environments that cost big $$$.
Business will not double prod cost for a staging environment mirroring prod. Take an example of any large bank you know. The online banking platform will cost tens if not hundreds of millions of dollars to run. Now consider that the bank will have hundreds of different platforms. It is just not economically feasible.
+ for the case of law: in some sectors, by law, only workers with "need to know" can access data.
Any dev environment data cannot, by law, be a copy of prod. It has to be test data; even anonymized prod data is not allowed in dev/test because of de-anonymization risk.
Given this, consider a platform / app that is multi-tenant (and therefore data driven), e.g. a SaaS app in a legally regulated industry such as banking or health care. Or even something like Shopify or GMail for corporate, where the app hosts multiple organizations and the org to be used is picked based on data (user login credentials).
The app in this scenario is driven by data parameterization - the client site and content are data driven, e.g. when clientXYZ logs on, the site becomes https://clientXYZ.yourAppName.com and all data, config etc are "clientXYZ" specific. And you have hundreds or thousands of clients, clientAAA through clientZZZ, on this platform.
In such a world, dev & test environments can never be matched with prod. Further, the behaviour of the client specific sites could be different even with the same code because data parameters drive app behaviour.
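The data-driven part can be sketched in a few lines; the tenant names and config keys here are invented for illustration:

```python
# A sketch of data-driven multi-tenancy: the same code serves every client,
# and behaviour is picked at request time from data (here, the subdomain).
# Tenant names and config keys are hypothetical.

TENANT_CONFIG = {
    "clientXYZ": {"theme": "blue", "region": "us"},
    "clientAAA": {"theme": "green", "region": "eu"},
}

def resolve_tenant(host: str) -> dict:
    """Map e.g. 'clientXYZ.yourAppName.com' to that client's config."""
    tenant = host.split(".", 1)[0]
    if tenant not in TENANT_CONFIG:
        raise KeyError(f"unknown tenant: {tenant}")
    return TENANT_CONFIG[tenant]

# Same code path, different behaviour per client - a staging environment
# would need every client's data to reproduce every client's behaviour.
print(resolve_tenant("clientXYZ.yourAppName.com")["theme"])  # blue
```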
Long story short, mirroring staging and prod is just not feasible in large corporate tech.
Nethack is one of the wildest. So many hardcoded edge cases and wild interactions.
From the wiki: "Food rations have a 1/7 chance of being rotten when eaten if they are uncursed and older than 30 turns, or else are blessed and older than 50 turns, while cursed food rations are always rotten. Food rations can be thrown to tame domestic canines and felines and pacify domestic equines. "
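The quoted rotten-ration rule, transcribed as code to show just how hardcoded the edge cases are (the function name is mine; the probabilities and turn thresholds come from the quote):

```python
# The food-ration rot rule from the quote above, as a sketch.
import random

def ration_is_rotten(buc: str, age_turns: int, rng=random.random) -> bool:
    """buc is 'blessed', 'uncursed', or 'cursed'."""
    if buc == "cursed":
        return True                  # cursed rations are always rotten
    if buc == "uncursed" and age_turns > 30:
        return rng() < 1 / 7         # 1/7 chance once older than 30 turns
    if buc == "blessed" and age_turns > 50:
        return rng() < 1 / 7         # blessing buys 20 extra safe turns
    return False

print(ration_is_rotten("cursed", 0))     # True
print(ration_is_rotten("blessed", 40))   # False
```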
The game takes on a new level when you find you can build an army of big cats, or gallivant about with a lance on a warhorse. (That said, this is still newbie stuff - I never got very far after many hours of attempts...)
I also recommend trying Dungeon Crawl Stone Soup. The devs are constantly refining the game to increase the fun factor and aren't afraid of removing decades-old features to do it. E.g. in more recent versions there is no food, hence you cannot starve to death (a questionable game mechanic in a type of game lacking any real "economy").
The hunger mechanic existed to stop players from grinding uselessly; with it gone, DCSS now plays more like an ARPG than a roguelike.
The piece is good but I think the primary segmentation is not 'useful' vs 'valued', it is strategic vs. tactical.
The author actually realizes this but did not nail this idea to the church door as part of his manifesto.
>Being valued, on the other hand, means that you are brought into
>more conversations, not just to execute, but to help shape the
> direction. This comes with opportunities to grow and contribute
> in ways that are meaningful to you and the business.
The first part is not being 'valued'; this is being 'useful strategically'.
The second part - "opportunities to grow and contribute in ways that are meaningful to you and the business" - that is being 'valued strategically'.
> Being useful means that you are good at getting things done in a
> specific area, so that people above you can delegate that
> completely. You are reliable, efficient, maybe even
> indispensable in the short term. But you are seen primarily as a gap-filler,
> someone who delivers on tasks that have to be done but are not
> necessarily a core component of the company strategy. “Take care
> of that and don’t screw up” is your mission, and the fewer
> headaches you create for your leadership chain, the bigger the rewards.
The first part is not being 'useful'; this is being 'useful tactically'.
The second part, "Take care of that and don’t screw up” is your mission, and the fewer headaches you create for your leadership chain, the bigger the rewards." is being 'valued tactically'.
So, the theory is that every member of staff is dropped into BOTH a 'useful' and a 'valued' bucket, for tactical work and for strategic work.
ie:
- one can be useful or not useful for strategic or tactical work or both
- one can be valued or not valued for strategic or tactical work or both
A couple of counterpoints:
1. You can, unfortunately, be useful strategically and not be valued. Think about the hatchet man every leader of a large organization has - the guy who does the layoffs. That slot is useful strategically but can be filled by almost anyone - it is not valued by the org.
2. You can, fortunately, be useful tactically, useless strategically, and be very very valued in an organization. The best examples of this are folks who are very very good at running operations. Think about a good truck dispatcher, or a 911 operator, or an air traffic controller. 90% of their job is effective tactical execution - dealing with this emerging situation right now effectively and efficiently. That is highly valuable to organizations.
Also note that every org needs strategy people and tactical people for long and short term.
One is not better than the other. They are just different.
And there are lots of very highly paid tactical roles, sometimes better paid, that are more challenging and more interesting than any strategy role.
These tend to be "do this or fix this thing right now efficiently and effectively" jobs.
For example, almost any practicing medical role is a tactical one - ER doctor (fix this sick person right now) - as are controllers for real-time stuff - concert and live TV producers (make this thing look good right now), air traffic controllers (keep these planes safe right now) etc etc.
So, net net, pick your spot - tactical vs. strategic or both, useful vs. valuable or both - get good at it, and then may the odds always be in your favor.
I would think that anyone working in a sewer inspection van would keep the door open because it is highly likely that sewer inspection vans smell like, well, sewer.
If the van is loaded with equipment, or even if it isn’t, theft and robbery are common in most of the US. You can’t leave a van door open and not be extremely vigilant.
I understand the primary premise about the difficulty with testing SQL and fully agree with it.
I do have a question though - while I understand how functors can help make the problem easier to tackle, I am not sure I fully understand how functors are different from a similar existing tool - stored procedures.
Some DB flavors:
- can take tables as arguments to stored procedures
- can return tables
- also offer the additional benefit of being able to run almost all flavors of SQL commands (DDL, DML, DQL, DCL, TCL) in those stored procedures
Netezza stored procedures, for example, can do what you describe here.
Am I missing something fundamental here? How are functors different from stored procedures? To me, they seem to be just a sub-class / sub-type of stored procedures.
The goal is that the composable parts get woven into an efficient, planner-friendly query. Stored procedures completely undermine that unless something very exciting has happened since last I checked (SQL Server, but probably applies to all of them). You will likely end up in a "row by agonizing row" situation.
(Okay, maybe you can find a few cases where stored procs work decently, but in general both composability and performance will be much worse than the proposed functors.)
OK, this I understand; that is a good insight - cursors are row-processing based, so it's gonna be slow.
I think Netezza, SQL Server and Oracle are all cursor-based processing "by default" so this makes a lot of sense. I suspect that they all have bulk operation capability but can't immediately think of how I would have worked bulk processing in a way that maps to this article - maybe something like analytic functions like windowing, partitioning etc. that is definitely not row by row.
Having said that, the examples I see for actual testing in the article are DQL / DML, so they would be multiple-row processing by default... yes, the functor definition / creation is a DDL process, but it is a "do once and reuse the definition" thing (like, as the author correctly observes, a view, which is the point of functors) and the functor in use would just be DML. In which case, functors go back to looking like stored procedures...
I also understood composability as being built in for SQL - for example, in Oracle, packages allow composability of stored procedures, triggers, sequences etc allow composability of DML and views allow composability of queries and tables - which the author points out in the article.
With functors, DDL, DML, DQL, DCL, TCL would still be the only command tools available unless a new command language was invented for SQL testing - let's call it something like DTL (Data Test Language) - with a whole new bunch of associated SQL keywords, capabilities and functionality built right into the core of the DB engine and optimized for what functors are trying to achieve.
Regarding "can't immediately think of how I would have worked bulk processing in a way that maps to this article" ...
I believe stored procedures where you construct dynamic SQL and execute the result can basically provide the composability/performance described, with bulk, non-row-based logic. If you keep it simple it can work OK.
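A minimal sketch of that "compose fragments as text, execute as one set-based statement" idea - table and column names are invented, and sqlite3 stands in for whatever RDBMS you'd actually use:

```python
# Compose query fragments as strings, then run the final text as ONE query so
# the planner sees a single set-based statement - no row-by-row cursor work.
import sqlite3

def only_active(table: str) -> str:
    """A composable, functor-like fragment: filter a table to active rows."""
    return f"(SELECT * FROM {table} WHERE active = 1)"

def total_by(subquery: str, key: str, amount: str) -> str:
    """Compose: aggregate any subquery (or table) by a key column."""
    return (f"SELECT {key}, SUM({amount}) AS total "
            f"FROM {subquery} GROUP BY {key} ORDER BY {key}")

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INT, active INT);
    INSERT INTO orders VALUES ('a', 10, 1), ('a', 5, 0), ('b', 7, 1);
""")

# The composed text is a plain SELECT the planner can optimize as a whole.
sql = total_by(only_active("orders"), "customer", "amount")
print(conn.execute(sql).fetchall())  # [('a', 10), ('b', 7)]
```

The key point is that the composition happens before execution, so nothing forces the engine into "row by agonizing row" processing.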
They seem somewhat like stored procedures, but not stored? As in a query can contain a functor in it and then use it immediately. I didn't see those `create functor` statements as anything other than ephemeral - or am I wrong?
EDIT: also stored procs that use imperative logic and cursors can be quite a bit slower than queries that achieve the same logic - the capability here is purposefully a subset of that and is just to help build standard SQL queries that can go through the standard query planner.
I think they have to be long-lived, or else they cannot make sense for performant testing, i.e. they are created as DB objects, using DDL, the same way tables, views, functions etc are made.
They can certainly be created at test run time, but that would slow things down a lot - you would essentially be creating a ton of objects every time you run the test, which means having a setup to check whether they exist, take them down if they do, or fix them if they don't match spec (e.g. column and data type changes etc etc).
The more I think about this, the more complicated I realize it would be to manage this dynamically:
You essentially have to build a test harness environment that figures out your testing elements dynamically from your data environment - with some kind of parameterization engine and data set to tell it what to look for so as to "make functors and run them" (e.g. all PKs or FKs, or all columns starting with a certain prefix, or all columns of a certain data type, etc etc) - gets the most up to date definitions of those elements from system tables, and uses that data to create, update or drop functor objects... wow, OK, this is getting complicated, I am going to stop now before I see the void.
Formulation matters and is very important.
Jet A-1 fuel, propane, regular 87 octane gas and Vaseline are four different formulations of some version of mineral oil (petroleum).
Which do you want in a car you are driving? On your parched lips? In your plane engine? Coming into your kitchen stove?