
How do we scale social accountability and knowing?

This is expected to enable us to solve distributed coordination problems. It should also facilitate richer, more meaningful relationships between people.

Expected outcomes include increased thriving and economic productivity.

[edit: consider the limit on how many people you can know, and the relationship between how deeply you come into relationship with that population and how large that number can be]




I have spent quite a lot of time thinking about coordination in general. Indeed, knowledge is a vital part of it. The problem I see is that knowledge is too vague, lossy, changing, and incomplete [as I mentioned in this comment: https://news.ycombinator.com/item?id=26203718].

A hypothetical solution would be a system that spoke a language similar to plain English, but that was deterministic. You let people write their problems and views to the system, and the system determines what the widest available consensus is within a given scope and which problems people perceive as highest priority. This has a lot of problems, but it's a good way to think about the topic. Even with such a system, would you really be solving the problems you want to solve?
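Very roughly, I picture something like this (a toy Python sketch; the structured "views" records and the scoring rule are my own stand-ins, and they quietly do all the heavy lifting that a plain-English front end would actually have to do):

    # Toy sketch: a deterministic "consensus finder" over structured input.
    # Assumes people's views arrive already parsed into
    # (person, scope, statement, priority) records -- the hard part.
    from collections import defaultdict

    def rank_consensus(submissions, scope):
        """submissions: list of (person, scope, statement, priority 1-10)."""
        support = defaultdict(set)    # statement -> people endorsing it
        priority = defaultdict(list)  # statement -> priorities assigned to it
        for person, s, statement, prio in submissions:
            if s != scope:
                continue
            support[statement].add(person)
            priority[statement].append(prio)
        # Widest consensus first, then highest average perceived priority.
        return sorted(
            support,
            key=lambda st: (len(support[st]),
                            sum(priority[st]) / len(priority[st])),
            reverse=True,
        )

    views = [
        ("alice", "city", "fix the bridge", 9),
        ("bob",   "city", "fix the bridge", 7),
        ("carol", "city", "more bike lanes", 8),
    ]
    print(rank_consensus(views, "city"))  # ['fix the bridge', 'more bike lanes']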

If it does solve them, then this is basically symbolic AI. You can try to relax the requirements... but you kinda need an "automatic coordinator". If you go with a manual coordinator instead, then I doubt you will be able to scale anything that's not extremely rigid and hierarchical, at which point you are re-introducing many of the same problems you were trying to fight in the first place.


A combination of "all categories are fuzzy" and "all models are wrong but some are useful"? I too doubt the effectiveness of a symbolic AI approach. Although I studied that and other approaches in the field, you may note that my background is in biologically plausible methods for pursuing artificial intelligence.

I think the direct human input method is given too much focus, although it and related interactions have their place. Fallible sensors directly reporting readings from reality already have enough noise-related issues. I suspect that more richly informing people will yield better results.

I am inspired by stories such as the fish farm pollution problem [0]. Consider how a reality-based game-theoretic analysis of agent choices might guide your selection of future workmates (or lakes) and facilitate a different kind of friction in finding your next contribution to the world.

[0] search "3. the fish" on https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...


I find your comment quite confusing.

>> A combination of "all categories are fuzzy" and "all models are wrong but some are useful"?

Are you talking about my first paragraph or symbolic AI?

>> The fallible sensors directly reporting readings from reality already has sufficient noise related issues.

I assume here you are trying to say that human input is not reliable.

I don't understand what your approach with AI is here. You seem to want to use it to better inform people? How? You are going to say that human input is not reliable, but then train an AI that can't explain itself and expect people to take its advice? Either noise can be mitigated at scale in both places or in neither.

Finally, I'm very familiar with Meditations on Moloch. But you seem to be betting on an "education-based" solution, which doesn't fit very well with the scenario that Meditations on Moloch exposes: the problem is not that some people couldn't make better choices (for society, for the collective), but rather that the "questionable" choices of a few can deeply compromise the game for everyone else. I mean, we all probably agree that it would be great to educate people on these concepts, but I doubt that will be enough to stop the dynamics that cause the problem.


I apologize for the unintended confusion. I don't find all expression safe in this context, so I have avoided some of it, as well as the amount of work it would take to describe what amounts to a ~36-year life obsession for me.

> Are you talking about my first paragraph or symbolic AI?

In the link you provided and the second paragraph of your first reply, you seem, to my reading, to suggest using a system to facilitate discovering agreement on specific actions, knowledge, and tactical choices. Stated differently, agreement within groups, perhaps large groups. In both comments you discussed the challenge of being specific and static, which is, in my opinion, the downfall of many symbolic systems: the presumption that our ability to discretely describe reality is sufficient. To me, fuzzy categories and useful-but-wrong models are comments on that finding. The systems you are describing sound useful but seem to solve a different problem than the one I mean to target.

> I assume here you are trying to say that human input is not reliable.

Yes, I find human output to be unreliable, and I believe it is well understood to be so. An example of a system that has elements of scaling social knowing is Facebook. I believe it is well understood that people often (and, statistically speaking, prevalently) present a facsimile of themselves there whenever they present anything more than superficially adjacent to their actual selves. This introduces varying amounts of noise into the signal and displaces participation in life, perhaps in exchange for reduced communication overhead. Humans additionally make errors regularly, whether through "fat fingers", an unexamined self, "bias", or whatever. See also "Nosedive" [0].

> I don't understand what's your approach with AI here

I haven't really described it - the ask was literally for the problem, not for solutions. There is a certain level of vaporware in my latest notion of exactly how to solve it. As stated obliquely, however, there are aspects of the solution that I don't really want to be dragged through a discussion of here on HN.

> an AI that can't explain itself

I haven't specified unexplainable AI. I actually see evidence-based explainability as a key feature of my current best formulation of a concrete solution. That, in context, presents quite a few nuts to crack.

> Finally, I'm very familiar with meditations on moloch

I only meant to link the fish story, but the link in MoM was broken and I failed to find a backup on archive.org (not that I put a whole ton of effort into looking).

Consider how the described "games" change if those willing to cooperate and achieve the maximal outcomes could preselect to only play with those who are inclined to act similarly. What if you grouped the defectors and cooperators to play within their chosen strategies based on prior action? Iterated games have different solutions, and I find those indicative of life, except that social accountability doesn't scale. In real life such specificity is impossible and no guarantees exist. Yet I believe that the right systemic support structures could solve a number of problems, including a small movement of the needle towards greater game-theoretic affinity and thereby a shift in the local maxima to which we have access.
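A toy illustration of the kind of shift I mean (Python; the payoff matrix and pairing rule are just the textbook prisoner's dilemma values, not any real mechanism design):

    # Toy illustration: repeated prisoner's dilemma where agents can
    # preselect partners by prior behavior (assortative pairing) versus
    # being paired at random. Standard textbook payoffs.
    import random

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(agents, rounds, assortative):
        score = {name: 0 for name, _ in agents}
        for _ in range(rounds):
            pool = agents[:]
            random.shuffle(pool)
            if assortative:  # cooperators end up playing cooperators, etc.
                pool.sort(key=lambda a: a[1])
            for (n1, s1), (n2, s2) in zip(pool[::2], pool[1::2]):
                p1, p2 = PAYOFF[(s1, s2)]
                score[n1] += p1
                score[n2] += p2
        return score

    agents = [(f"coop{i}", "C") for i in range(10)] + \
             [(f"def{i}", "D") for i in range(10)]
    random.seed(0)
    print("random pairing:     ", sum(play(agents, 100, False).values()))
    print("assortative pairing:", sum(play(agents, 100, True).values()))

The aggregate payoff under assortative pairing is reliably higher, which is the "shift in the local maxima" I'm gesturing at, in miniature.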

[0] https://en.wikipedia.org/wiki/Nosedive_(Black_Mirror)


Thanks, that was much clearer. Well, there are indeed many options and paths we could take in the space, so good luck with whatever you end up trying. Only one final note: I'm a very secretive person myself, and even beyond that I understand your reticence to share more details about some of your specific ideas... but I think that sharing more openly would align better with that shift in the local maxima you aspire to achieve. For example, I'm sure at least some of us would be interested in reading a submission or blog post about many of these ideas.


The question is too far up in fuzzy space. Narrow it down to several use cases and specific problems within those, and the search field will be more manageable. Examples: Social workers want to be able to handle more cases appropriately. How many cases can they handle without diminishing quality scores? Politicians want to appear attentive to the needs of as many constituents as possible. How do they group needs into buckets to find what is most relevant? Find the overlap and dig into it with more cases and then questions.


Like automating the analysis of a recorded argument according to Gottman Institute and other social heuristics, to augment marriage counseling services?

[edit: i.e. count positive and negative sentiment statements assigned to each speaker and compare the per-speaker ratio to the experimentally determined minimum "healthy" ratios (which have not yet been replicated)]
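Something like this sketch (Python; the sentiment labels would come from some upstream classifier, and the 5:1 threshold is the oft-cited Gottman figure, which, as noted, hasn't been replicated):

    # Toy sketch: per-speaker positive/negative ratio from a labeled
    # transcript, compared against a "healthy" threshold.
    from collections import Counter

    HEALTHY_RATIO = 5.0  # positives per negative, the commonly cited figure

    def speaker_ratios(transcript):
        """transcript: list of (speaker, sentiment) where sentiment is 'pos'/'neg'."""
        counts = {}
        for speaker, sentiment in transcript:
            counts.setdefault(speaker, Counter())[sentiment] += 1
        return {
            s: c["pos"] / max(c["neg"], 1)  # avoid division by zero
            for s, c in counts.items()
        }

    transcript = [("A", "pos"), ("A", "neg"), ("B", "pos"),
                  ("A", "pos"), ("B", "pos")]
    for speaker, ratio in speaker_ratios(transcript).items():
        flag = "ok" if ratio >= HEALTHY_RATIO else "below threshold"
        print(speaker, round(ratio, 1), flag)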

You're right that there needs to be a tractable starting place. This is not lost on me. I may have used a flexible definition of "close to solving", but one's interpretation also fits into the scope of the effort. I'm at least 10% into it! ;P



