
it isn't either-or. it is both.

if you look at the data pipeline, you start with Data Mesh: individual teams create and maintain the datasets for their own domain. downstream of that you get the Data Fabric, which blends those domain-specific datasets together automatically, based on a combination of a defined data model and the declared needs of consumers. a centralized team owns the blended tables, but not the business logic behind them. a rough sketch of that flow is below.
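a minimal sketch of how the mesh-then-fabric split might look in code. all names here (DomainDataset, ConsumerNeed, build_fabric_view, the table names) are hypothetical illustrations, not Airbnb's actual implementation:

  # Hypothetical sketch of the mesh -> fabric flow described above.
  from dataclasses import dataclass

  @dataclass
  class DomainDataset:
      # Owned by an individual team (the "mesh" side): the team maintains
      # the data and business logic for its own domain.
      domain: str   # e.g. "bookings", "pricing"
      table: str    # physical table the team publishes
      key: str      # join key defined by the shared data model

  @dataclass
  class ConsumerNeed:
      # Declared by a downstream consumer: which domains it needs, joined on what.
      name: str
      domains: list[str]
      join_key: str

  def build_fabric_view(datasets: list[DomainDataset], need: ConsumerNeed) -> str:
      # The centralized "fabric" side: it owns the blended view it produces,
      # but only wires domain datasets together; it does not re-implement the
      # business logic inside each domain's table.
      selected = [d for d in datasets if d.domain in need.domains]
      base, *rest = selected
      joins = "\n".join(f"JOIN {d.table} USING ({need.join_key})" for d in rest)
      return f"CREATE VIEW {need.name} AS\nSELECT *\nFROM {base.table}\n{joins}"

  # Example: two domain-owned datasets blended for one declared consumer need.
  datasets = [
      DomainDataset("bookings", "bookings.fact_reservations", "listing_id"),
      DomainDataset("pricing", "pricing.dim_listing_prices", "listing_id"),
  ]
  need = ConsumerNeed("reservations_with_prices", ["bookings", "pricing"], "listing_id")
  print(build_fabric_view(datasets, need))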

an example of this is Airbnb's data stack; you can read about it here: https://link.medium.com/qzqciW7Milb



