If this is the case, it can be gamed. People can use stolen documents. Nothing says a person can't own multiple computers, so what happens if someone uses your ID on 20 laptops? Will the companies just claim "but the machine said they were old enough"? The law may not have teeth, but it will still violate privacy.
Something like https://protocol.humanidentity.io (disclaimer: I built it, sorry for the plug) or any other privacy preserving service might work better. A platform can then require that a person verifies age in a privacy preserving way before viewing adult content.
I really like your solution. Have you considered making connections with well-connected individuals, and potentially making small compromises on your product's integrity, to appeal to the people who could make this a legislated standard across the board?
Or perhaps golfing at the right clubs to make it a de facto industry standard, like ID.me seems poised to become?
I hate seeing stuff like this once and then never again, because the people capable of making something this good are unable to "play the game", or to otherwise optimize for breaking the social-moral glass ceiling in a given problem space.
Thank you, this is very early stages; still trying to validate the idea. But yes, the reason there is a sovereign verifier tier is because I am sure governments will want their own rules, and the protocol is meant to be decentralized. So one government can legislate that it is the exclusive verifier for its country, while another takes a more hands-off or hybrid approach.
From what I’ve seen, this always ends in some small fine/settlement and “no admission of guilt”. This type of protection is the source of these mishaps.
This is a hard problem. I understand DEI, and I understand anti-DEI, and I agree with both, in that DEI aims to close a gap for disadvantaged groups that were discriminated against in the past, to the point that they are demographically cut off from wealth: they can't gain the escape velocity to acquire assets that work for them. They're locked into the jobs the wealthier don't want, regardless of their skill level. And even getting to that skill level is a challenge, because they're too busy working three low-paying jobs to attend school.
On the other hand, members of a demographic that indirectly benefitted from discrimination in the past, but were themselves not direct beneficiaries can’t be punished for that. They apply to jobs just the same, compete just the same, and aspire to be successful just the same. And just because they can’t relate to the fact that the other demographic has it harder, doesn’t make them evil.
So where's the balance? What's the solution? Companies implemented DEI because they were pressured to do it, and now they're pressured in the opposite direction. Will they be sued again in a few decades for the other kind of discrimination, the "we only hire the best, it just so happens that all the best are of X demographic" kind?
This is why you only use virtual cards for subscriptions. I have never had this issue. Even Adobe couldn't get a cent over what I was willing to pay. They didn't let me cancel the account? Fine, kill the card. You don't have to beg these companies.
My process: start ideating and get the AI to poke holes in your reasoning, your vision, scalability, etc. Do this for a few days while taking breaks. This is all contained in one Markdown file with mermaid diagrams and sections.
Then use that ideation to architect. Dive into details and tell the AI exactly what your choices are: how certain methods should be called, how logging and observability should be set up, what language to use, type checking, coding style (configure ruthless linting and formatting before you write a single line of code), what testing methodology and framework to use (unit, integration, e2e), which database, how you will handle migrations. Specify as much as possible, so the AI is as confined as possible to how you would do it.
Then create a plan file, have it manage it like a task list, and implement in parts. Before starting, it needs to present you a plan; in it you will notice it makes mistakes, misunderstands things that you maybe didn't clarify before, or simply forgets. You add to AGENTS.md or whatever, make changes to the AI's plan, tell it to update plan.md, and when satisfied, proceed.
Once it's done, review the code. You will notice there is always something to fix: hardcoded variables, a SQL migration with seed data that shouldn't actually be a migration, just generally crazy stuff.
The worst part is that the AI is always very loose on requirements. You will notice all its fields are nullable and records have little to no validation; you report an error when testing and it tries to solve it with a brittle async workaround, like LISTEN/NOTIFY or a callback, instead of the architecturally correct solution. Things like that are hell to debug at scale, especially if you did not write the code.
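As a hypothetical illustration of that nullable-everything pattern (the record and field names here are made up, not from any real codebase), compare a loose model against one that actually enforces its requirements:

```python
from dataclasses import dataclass
from typing import Optional

# What the AI tends to emit: everything optional, nothing checked.
@dataclass
class LooseUser:
    email: Optional[str] = None
    age: Optional[int] = None

# What the requirements usually actually mean: mandatory, validated fields.
@dataclass
class StrictUser:
    email: str
    age: int

    def __post_init__(self) -> None:
        if "@" not in self.email:
            raise ValueError(f"invalid email: {self.email!r}")
        if not 0 <= self.age <= 150:
            raise ValueError(f"age out of range: {self.age}")

StrictUser(email="a@example.com", age=30)   # ok
# StrictUser(email="nope", age=30)          # raises ValueError
```

The point isn't the specific checks; it's that `LooseUser()` with no arguments is a "valid" record, so every consumer downstream has to handle `None`, which is exactly the kind of thing you end up debugging at scale.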
If you do this and iterate, you will gradually end up with a solid harness, and you will need to review less.
> Once it's done, review the code. You will notice there is always something to fix: hardcoded variables, a SQL migration with seed data that shouldn't actually be a migration, just generally crazy stuff.
>
> The worst part is that the AI is always very loose on requirements. You will notice all its fields are nullable and records have little to no validation; you report an error when testing and it tries to solve it with a brittle async workaround, like LISTEN/NOTIFY or a callback, instead of the architecturally correct solution. Things like that are hell to debug at scale, especially if you did not write the code.
For that I usually get it reviewed by LLMs first, before reviewing it myself.
Same model but a clean session, plus different models from different providers. And multiple (at least two) automated rounds: review -> triage by the implementing session -> addressing the feedback, with reasons given for anything deferred or ignored -> review -> triage by the implementing session -> and so on.
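A minimal sketch of that loop, with `reviewers` and `triage_and_fix` standing in for whatever model calls you actually wire up (the function names, signatures, and default round count are all my assumptions, not any real API):

```python
from typing import Callable

def review_rounds(
    code: str,
    reviewers: list[Callable[[str], list[str]]],   # each clean session / provider returns findings
    triage_and_fix: Callable[[str, list[str]], str],  # implementing session applies or defers them
    rounds: int = 2,
) -> str:
    """Run up to `rounds` automated cycles of review -> triage -> fix."""
    for _ in range(rounds):
        findings: list[str] = []
        for reviewer in reviewers:
            findings.extend(reviewer(code))
        if not findings:
            break  # reviewers are satisfied, stop early
        code = triage_and_fix(code, findings)
    return code
```

In practice each callable would be a prompt to a fresh session, and `triage_and_fix` would also record the reasons for any deferred or ignored feedback so the next review round can see them.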
Works wonders.
Committing the initial spec / plan also helps the reviewers compare the actual implementation to what was planned. Didn’t expect it, but it’s worked nicely.
I agree! It should be very stable, IMO. If not, then please send a bug report and we'll look into it. Also, now it scales well with the number of listening connections (given clients listen on unique channel names): https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit...
The LISTEN/NOTIFY feature really just doesn’t get enough PR. It is perfectly suitable for production workloads yet people still want to reach for more complicated solutions they don’t need.
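For anyone who hasn't tried it, the mechanics really are just two commands; the channel and payload here are made-up examples:

```sql
-- Session A: subscribe to a channel
LISTEN job_done;

-- Session B: publish; every session listening on the channel gets the payload
NOTIFY job_done, 'order 42 processed';

-- Or from a function/trigger:
SELECT pg_notify('job_done', 'order 42 processed');
```

Notifications are delivered when the notifying transaction commits, which is what makes this a handy lightweight wakeup signal without an extra message broker.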
I find it very interesting that you assume this method would branch out to other projects. I find it even more interesting that you assume all software codebases use a database, give a damn about async anything, and that these ideas percolate out to general software engineering.
Sounds like a solid way to make CRUD web apps, though.
GP is clearly providing examples of categories of tasks. Sure, not all languages do “async fn foo()”, but almost all problem domains involve some sort of making sure the right things happen at the right times, which is in a similar ballpark.
Holier-than-thou "yeah well I work on stuff that doesn't use databases, checkmate!" doesn't really land. Data still gets moved around somehow, and often over a network!
I think of it not as "last step in thinking", but as "first contact with reality". Your mind is amazing and lying to you, filling in gaps and telling you everything is ok. The moment you try to export what's in your mind, math stops mathing. So writing is an important exercise.
This is fine. But companies seem to not have a control lever for employee wellbeing. If humanity works to solve problems, don’t you think overwork is also a problem that needs to be addressed?
“The rationale for the change, according to Rodriguez, is that interaction data makes company AI models perform better. Adding interaction data from Microsoft employees has led to meaningful improvements, he claims, such as an increased acceptance rate for AI model suggestions.”
That's the problem with promises. Of course using the data makes your model better; everyone knew this, knew you'd be tempted to use it, and that is exactly why you felt motivated to promise not to use it in order to gain adoption.
So rug-pulling now by saying it helps improve your model is meaningless. You can just say you lied. Lying is a real word; it should be used when it applies.