It seems Claude (3.5 Sonnet) gave me the longest summary of this discussion, using a basic single-shot prompt:
After reviewing the Hacker News thread, here are some of the main repeating patterns I observed:
* Fatigue and frustration with AI hype: Many commenters expressed being tired of the constant AI hype and its application to every domain.
* Concerns about AI-generated content quality: There were recurring worries about AI producing low-quality, generic, or "soulless" content across various fields.
* Debate over AI's impact on jobs and creativity: Some argued AI would displace workers, while others felt it was just another tool that wouldn't replace human creativity and expertise.
* Skepticism about AI capabilities: Several commenters felt the current AI systems were overhyped and not as capable as claimed.
* Copyright and ethical concerns: Many raised issues about AI training on copyrighted material without permission or compensation.
* Polarized views on AI's future impact: There was a split between those excited about AI's potential and those worried about its negative effects.
* Comparisons to previous tech hypes: Some likened the AI boom to past technology bubbles like cryptocurrency or blockchain.
* Debate over regulation: Discussion on whether and how AI should be regulated.
* Concerns about AI's environmental impact: Mentions of AI's large carbon footprint.
* Meta-discussion about HN itself: Comments about how the discourse on HN has changed over time, particularly regarding AI.
* Capitalism critique: Some framed issues with AI as symptoms of larger problems with capitalism.
* Calls for embracing vs rejecting AI: A divide between those advocating for adopting AI tools and those preferring to avoid them.
These patterns reflect a community grappling with the rapid advancement and widespread adoption of AI technologies, showcasing a range of perspectives from enthusiasm to deep skepticism.
In short: wordpress.org is just the personal homepage of Matt Mullenweg, not legally or financially related to either the Foundation or Automattic? There could be technical relations, but who hasn't "forgotten" their personal homepage on their employer's machine; just an honest mistake.
The problem with semantic models that I've seen in tools like Looker, Tableau, and Qlik (very probably the same for Power BI) is that they are tightly coupled to the tool itself and work only within it. If you want a "modern data system", you want them decoupled and implemented as an open semantic model that is then accessible to data consumers in Google spreadsheets, Jupyter notebooks, and whatever BI/analytics/reporting tools your stakeholders use or prefer.
There are very new solutions for this, like dbt semantic models; their only issue is that they tend to be so fresh that bigger orgs (where they make the most sense) may still be shy about implementing them.
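For illustration, a minimal sketch of what such a definition looks like, following dbt's MetricFlow-style YAML as I understand it; the table and column names ("fct_orders", "amount") are invented, and the YAML is embedded in Python to keep these examples in one language.

```python
# Sketch of a dbt semantic model + metric definition (MetricFlow-style YAML).
# Model/column names are invented; check the dbt docs for the exact schema.
import yaml  # pip install pyyaml

SEMANTIC_MODEL = yaml.safe_load("""
semantic_models:
  - name: orders
    model: ref('fct_orders')          # the dbt model this is built on
    entities:
      - name: order_id
        type: primary
    dimensions:
      - name: ordered_at
        type: time
        type_params:
          time_granularity: day
    measures:
      - name: revenue
        agg: sum                      # one of dbt's fixed aggregations
        expr: amount

metrics:
  - name: revenue
    label: Revenue
    type: simple
    type_params:
      measure: revenue
""")

# Any consumer (notebook, spreadsheet connector, BI tool) that speaks the
# semantic layer's API can then request "revenue by day" without knowing
# the underlying SQL.
print(SEMANTIC_MODEL["metrics"][0]["name"])  # -> revenue
```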
To the original topic: I'm not sure how much PG17 can be used in these stacks; analytical databases are usually a much better fit - BigQuery, Snowflake, maybe Redshift, and in the future (Mother)Duck(DB).
The semantic model in Power BI is not tightly coupled to the tool. It is an SSAS Tabular model. It is pretty trivial to migrate a Power BI model to Analysis Services (Microsoft's server component for semantic models, hostable on-prem or as a cloud offering).
Both Power BI and Analysis Services are accessible via XMLA. XMLA is an old standard, like SOAP old, much older than dbt.
XMLA provides a standard interface to OLAP data and has been adopted by other vendors in the OLAP space, including SAP and SAS as founding members. Mondrian stands out in my mind as an open source tool which also allows clients to connect via XMLA.
From what I can see, dbt only supports a handful of clients and has a home-grown API. While you may argue that dbt's API is more modern and easier to write a client for (and I'd probably agree with you! XMLA is a protocol, not a REST API), the point of a standard is that clients do not have to implement support for individual tools.
And of course, if you want API-based access there is a single API for a hosted Power BI semantic model to execute arbitrary queries (not XMLA), though its rate limits leave something to be desired: https://learn.microsoft.com/en-us/rest/api/power-bi/datasets...
Note the limit of "one table" there means one resultset. The resultset can be derived from a single arbitrary query that draws on data from multiple tables in the model and be presented as a single tabular result.
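For concreteness, a minimal sketch of calling that API from Python; the endpoint and response shape match the Datasets "Execute Queries" REST API as I understand it, while the dataset ID, token acquisition, and table/column names are placeholders.

```python
# Minimal sketch: run one arbitrary DAX query against a hosted Power BI
# semantic model via the "Execute Queries" REST API. Dataset ID, AAD token,
# and table/column names are placeholders.
import requests

DATASET_ID = "00000000-0000-0000-0000-000000000000"       # placeholder
TOKEN = "<AAD access token with Dataset.Read.All scope>"  # placeholder

url = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"
body = {
    "queries": [
        {
            # One DAX query; it may draw on many tables in the model but
            # comes back as a single tabular resultset.
            "query": """
                EVALUATE
                SUMMARIZECOLUMNS (
                    'Date'[Year],
                    "Revenue", SUM ( Sales[Amount] )
                )
            """
        }
    ],
    "serializerSettings": {"includeNulls": True},
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
rows = resp.json()["results"][0]["tables"][0]["rows"]
print(rows[:5])
```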
Note: XMLA is an old standard, and so many libraries implementing support are old. It never took off like JDBC or ODBC did. I'm not trying to oversell it. You'd probably have to implement a fair bit of support yourself if you wanted a client tool to use such a library. Nevertheless it is a protocol that offers a uniform mechanism for accessing dimensional, OLAP data from multiple semantic model providers.
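As an example of what client support looks like in practice, here is a hedged sketch using the community pyadomd package, which wraps the ADOMD.NET client (itself speaking XMLA to the server); the ADOMD.NET path, server, and catalog names are assumptions.

```python
# Sketch: querying a Tabular model through the ADOMD.NET client (which talks
# XMLA to the server) via the community "pyadomd" wrapper. Windows/.NET in
# practice; the library path, server, and catalog below are placeholders.
import sys
sys.path.append(r"C:\Program Files\Microsoft.NET\ADOMD.NET\160")  # placeholder
from pyadomd import Pyadomd  # pip install pyadomd

CONN = "Provider=MSOLAP;Data Source=localhost;Catalog=AdventureWorks"  # placeholders

dax = "EVALUATE TOPN ( 5, 'Sales' )"  # any DAX (or MDX) the model accepts
with Pyadomd(CONN) as conn:
    with conn.cursor().execute(dax) as cur:
        for row in cur.fetchall():
            print(row)
```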
With regard to using PG17, the Tabular model (as mentioned above, shared across Power BI and Analysis Services) can operate in a caching/import mode or in a passthrough mode (aka DirectQuery).
When importing, it supports a huge array of sources, including relational databases (anything with an ODBC driver), file sources, and APIs. In addition to raw HTTP methods, Microsoft also has a huge library of pre-built connectors that wrap the APIs of many SaaS products, such that users do not even need to know what an API is: these connectors prompt for credentials and let users see whatever data the SaaS exposes to their permissions. Import supports Postgres.
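To make the "anything with an ODBC driver" point concrete, this is the kind of generic ODBC access being referred to, sketched from Python with pyodbc against Postgres; the driver name, host, credentials, and table are placeholders, and Power BI reaches the same sources through its own connector framework.

```python
# Sketch of generic ODBC access to Postgres. Requires pyodbc plus the
# PostgreSQL ODBC driver (psqlODBC); connection details are placeholders.
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={PostgreSQL Unicode};"
    "SERVER=localhost;PORT=5432;"
    "DATABASE=analytics;UID=report_reader;PWD=secret"
)
for row in conn.execute(
    "SELECT order_date, sum(amount) FROM orders GROUP BY order_date LIMIT 5"
):
    print(row)
```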
When using DirectQuery, no data is persisted by the semantic model; instead, queries are generated on the fly and passed to the backing data store. This can be configured with SSO to allow the database security roles to control what data individual users can see, or it can be configured with security roles at the semantic model layer (such that end users need no specific DB permissions). DirectQuery supports Postgres.
With regard to security, the Tabular model supports static and dynamic row-level and object-level security. Objects may be tables, columns, or individual measures. This is supported for both import and DirectQuery models.
With regard to aggregation, dbt seems to offer sum, min, max, distinct count, median, average, percentile, and boolean sum. Or you can embed a snippet of SQL that must be specific to the source you're connecting to.
The Tabular model offers a full query language, DAX, that was designed from the ground up for expressing business logic in analytical queries and large aggregations. Again, you may argue that another query language is a bad idea, and I'm sympathetic to that: I always advise people to write as little DAX as possible and to avoid complex logic if they can. Nevertheless, it allows a uniform interface to data regardless of source, and it allows much more expressivity than dbt, from what I can see. I'll also note that it seems to have hit a sweet spot, based on the rapid growth of Power BI and the huge number of people who are not programmers by trade writing DAX to achieve business goals.
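To illustrate the expressivity gap, here is a measure that goes beyond dbt's fixed aggregation list; the 'Sales' and 'Dates' table names are invented, but CALCULATE, SAMEPERIODLASTYEAR, and DIVIDE are standard DAX (shown as a string to keep these examples in one language).

```python
# Illustration of DAX expressivity beyond fixed aggregations: year-over-year
# revenue growth using time intelligence. Table names are invented.
YOY_GROWTH_MEASURE = """
Revenue YoY % =
VAR CurrentRev = SUM ( Sales[Amount] )
VAR PriorRev =
    CALCULATE ( SUM ( Sales[Amount] ), SAMEPERIODLASTYEAR ( Dates[Date] ) )
RETURN
    DIVIDE ( CurrentRev - PriorRev, PriorRev )
"""
# None of dbt's built-in aggregations (sum, min, max, median, ...) express
# this directly; there you would fall back to source-specific SQL instead.
```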
There are plenty of pain points in the Tabular model as well. I do not intend to paint a rosy picture, but I have tried to address the claims made and to make the case for why I disagree with the characterizations of the model as tightly coupled to the reporting layer, closed, and limited.
Side note: the tight coupling in Power BI goes the other way. The viz layer can only interact with a Tabular model. The model is so broad because the viz tool is so constrained.
Thanks for the writeup. There are indeed use cases, especially in the MS multiverse. It's proof of the none -> basic -> complex "can do everything" (SOAP, XML, RPC) -> radically simpler "do what really matters" (REST, JSON, Markdown) path. I'm not really sure the dbt semantic layer is the final open "standard" for future analytical models and metrics; it has its own question marks (it is literally just a transformer with metrics as an add-on, and there are only initial implementations), but today I'd rather give that thing a try. Simpler is so much better.
In the nutiteq mobile maps SDK (later Carto, now abandonware) we used a specially compressed bitmap to represent 'water' and 'empty land' tile masks, covering those two special cases. We shipped a planet-scale embedded mbtiles package for mobile at about 30 GB, if I remember well. This tile mask concept (essentially an instant bitmap index) should work well for the server case also.
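A minimal sketch of the idea (not the actual nutiteq/Carto format): one bit per tile at a chosen zoom level flags "entirely water" tiles, so those can be answered instantly without touching tile storage; a second mask would do the same for empty land.

```python
# Sketch of a tile mask: one bit per tile at a fixed zoom level flags tiles
# that are entirely water, so no tile data needs to be stored or rendered
# for them. Zoom level and encoding are assumptions for illustration.

MASK_Z = 10                          # zoom level the mask is built at
N = 2 ** MASK_Z                      # 1024 x 1024 tiles at z10
water_mask = bytearray(N * N // 8)   # 128 KiB raw; RLE shrinks it further

def set_water(x: int, y: int) -> None:
    i = y * N + x
    water_mask[i // 8] |= 1 << (i % 8)

def is_water(x: int, y: int) -> bool:
    i = y * N + x
    return bool(water_mask[i // 8] & (1 << (i % 8)))

def serve_tile(z: int, x: int, y: int) -> str:
    # A tile at z >= MASK_Z inherits "all water" from its ancestor at MASK_Z.
    if z >= MASK_Z and is_water(x >> (z - MASK_Z), y >> (z - MASK_Z)):
        return "canned all-water tile"   # instant answer, no storage hit
    return "fetch/render real tile data"
```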
I'm not sure if this is a joke or not, but there are no actual graphics in this besides a yellow ball and green rectangles, while the OP game has actual textured pipes, a textured floor, and a background scene. The score counter looks to use a basic system or browser font, while the OP game has a custom font. The OP game also has high-score tracking, sharing, and a proper main screen instead of just dropping you into gameplay.
It's a haha-only-serious joke about using a prompt to write it. Sort of impressive it's even that far along, but OP's effort is far more interesting to me
Yep, it is not aimed at the OP really. Just an observation of how the numbers of programming work nowadays: I had my first useful mini-program at about 3-4 cryptic bytes some 33 years ago; now you can make a playable game with <200 chars of human language. It is of course “compiled” to megabytes, with a tera+byte-scale interpreter, but the meaningful source is at human scale (again).
That was a 20 megabyte page load for me even with all the ad cruft blocked...to load a game in a browser, which is already the most batteries-included application platform I can think of.
Sweden is now building a new prison for underage offenders and lacks thousands of places for inmates. Other countries are talking about renting their spare capacity to them.