This is a very bad article, probably written to attract traffic to their PM course.
For me it's very simple: there are AI Product Managers. But those are not PMs who use AI tools; they are PMs who manage AI products. And there is a difference between Software 1.0 and Software 2.0 (AI- or ML-model-based) products. These products are managed not as engineering but as science. These products are managed with experiments. These products are non-deterministic. These products have virtually infinite inputs and outputs. These products do not address one problem but many at the same time. So if you ask me, of course there is such a thing as an AI Product Manager, just not what the author, and many other PMs, think it is.
I think the only proven use cases for ZUIs have been Maps, Calendars, and Photo apps (Apple Photos and Google Photos). The last two use it to switch between different time-period views: Day->Month->Year for photos and Day->Week->Month for calendars.
There is maybe some room for design too (Figma), but there are many times I wish it DIDN'T do that and encouraged better frame/file organization instead.
Similarly, it's too often used for (mis)organization, like in Miro or Prezi or "mind map" apps. I feel like those try to shoehorn information into Sherlock-like "mind palaces" in a way that only makes sense for the creator but is inscrutable to everyone else, and just makes information harder to find later on. They always lead to some sort of pixel-hunting where the presenter zooms out and in, out and in, out and in, wasting time on navigation and placefinding instead of information dissemination.
I think the overall point is some types of hierarchical information naturally lend themselves to "drill down" type UIs more than others. When you have levels of detail you don't need to see at first glance, drilling/zooming is awesome. When you have a bunch of things of equal hierarchy, presenting them in a big flat pile isn't any better in the virtual world than in the real world... it's the digital equivalent of 10,000 sticky notes on a wall.
>Ben Shneiderman was inspired by the 1960s 'Op Art' and the exhibits that he came across at the Museum of Modern Art in New York. Op Art, or Optical Art, is a form of kinetic art related to geometric designs that create movement in the eyes.
>This site features draft designs and full views of the Treemap Art project.
>By Ben Shneiderman
>Although I conceived treemaps for purely functional purposes (understanding the allocation of space on a hard drive), I was always aware that there were appealing aesthetic aspects to treemaps. Maybe my experiences with OP-ART movements of the 60s & 70s gave me the idea that a treemap might become a work of art. That idea was revived in 2013 by way of my contacts with Manuel Lima who produced a beautiful coffee-table book on the history of trees that has several chapters on treemaps and their variations.
>I believe that there are at least four aesthetic aspects of treemaps:
>1. layout design (slice-and-dice, squarified, ordered, strip, etc.),
>2. color palette (muted, bold, sequential, divergent, rainbow, etc.),
>3. aspect ratio of the entire image (square, golden ratio, wide, tall, etc.), and
>4. prominence of borders for each region, each hierarchy level, and the surrounding box.
Ben Shneiderman: Every AlgoRiThm has ART in it: Treemap Art Project:
>Ben Shneiderman, distinguished university professor, University of Maryland, College Park and National Academy of Engineering member, spoke at the October 16, 2014 DC Art Science Evening Rendezvous (DASER). Ben Shneiderman described the invention of treemaps and showed examples of its usage. He then turned to the aesthetics of treemaps, which led him to create the “Every AlgoRiThm has ART in it: Treemap Art Project” exhibit on view in the Keck Center first floor galleries (www.cpnas.org). He demonstrated how users of the free treemap application can generate their own artworks, without programming.
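Since the aesthetics list above leads with layout design, here's a toy sketch of the simplest of those layouts, slice-and-dice, which alternates the split axis at each level of the hierarchy. This is my own illustration of the idea, not code from the project:

```python
# Toy slice-and-dice treemap: recursively partition a rectangle among
# weighted children, alternating the split axis per level.
# My own illustration of the idea, not Shneiderman's code.

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """node = (name, size, children); returns a list of (name, rect)."""
    if out is None:
        out = []
    name, size, children = node
    out.append((name, (x, y, w, h)))  # parent rect frames its children
    if children:
        total = sum(c[1] for c in children)
        offset = 0.0
        for child in children:
            frac = child[1] / total
            if depth % 2 == 0:  # even depth: slice along the x axis
                slice_and_dice(child, x + offset * w, y, w * frac, h,
                               depth + 1, out)
            else:               # odd depth: slice along the y axis
                slice_and_dice(child, x, y + offset * h, w, h * frac,
                               depth + 1, out)
            offset += frac
    return out

# Example: a 100x100 treemap of a tiny "disk usage" hierarchy.
tree = ("/", 100, [("docs", 60, [("a.txt", 40, []), ("b.txt", 20, [])]),
                   ("pics", 40, [])])
for name, rect in slice_and_dice(tree, 0, 0, 100, 100):
    print(f"{name:8s} rect={rect}")
```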
The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines — Don Hopkins — October 1989 (a paper I wrote when I worked with Ben Shneiderman at his University of Maryland Human Computer Interaction Lab):
>Darkness fell in from every side, a sphere of singing black, pressure on the extended crystal nerves of the universe of data he had nearly become… And when he was nothing, compressed at the heart of all that dark, there came a point where the dark could be no more, and something tore. The Kuang program spurted from tarnished cloud, Case’s consciousness divided like beads of mercury, arcing above an endless beach the color of the dark silver clouds. His vision was spherical, as though a single retina lined the inner surface of a globe that contained all things, if all things could be counted.
>[Gibson, Neuromancer]
>The Pseudo Scientific Visualizer is the object browser for the other half of your brain, a fish-eye lens for the macroscopic examination of data. It can display arbitrarily large, arbitrarily deep structures, in a fixed amount of space. It shows form, texture, density, depth, fan out, and complexity.
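That "fish-eye lens" idea is usually formalized as a degree-of-interest function in the style of Furnas: a node's interest is its a priori importance minus its distance from the current focus, and anything below a threshold gets elided. A toy sketch of the general idea (not the actual PSIBER code):

```python
# Generic Furnas-style fisheye: DOI(node) = API(node) - distance(node, focus),
# where API is a priori importance (here: -depth, so shallower nodes matter
# more). Nodes below a DOI threshold are elided. Not PSIBER's actual code.

def doi(node_depth, tree_dist_to_focus):
    api = -node_depth                 # root is a priori most important
    return api - tree_dist_to_focus   # interest decays away from the focus

def visible(node_depth, tree_dist_to_focus, threshold=-4):
    return doi(node_depth, tree_dist_to_focus) >= threshold

print(visible(2, 0))  # True:  2 levels deep, right at the focus (DOI = -2)
print(visible(3, 5))  # False: 3 levels deep, 5 hops from the focus (DOI = -8)
```

The fixed-space property falls out naturally: as the focus moves, detail collapses everywhere except around it, so arbitrarily deep structures still fit on one screen.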
We had a system at an old job that someone had dubbed WOPR for no other reason than they liked the movie. A colleague and I wrote a similar system that filled in some gaps and decided that the best name for it was "Big Mac" to delightfully mix references. First thing I thought of.
Not an expert, but I've seen the following commented on other test-flight videos: on initial test flights, the landing gear is always kept down to minimize risk. If a sudden landing is needed, the gear is already down, there's no risk of equipment getting stuck, there's less mental load on the pilot during an emergency landing, etc. Basically, when testing, you want to minimize the variables being tested. Once airworthiness is validated, then you can test the landing gear systems.
I would love to see Perplexity.ai in the benchmark. It has completely replaced Google/DDG for information questions for me. I still use DDG when I want to do a navigational query (e.g., finding the URL for a blog whose name I only partially recall).
While Kagi was the product that brought me the most joy in 2022, perplexity.ai has been the one for 2023, even though I only recently started using it. It's just been a joy to be able to iteratively discuss most of my searches.
I've been really enjoying Perplexity as well. It's a much better Internet/search-focused experience than ChatGPT, Bing, or Bard. For anyone interested, until the new year (~20 more hours?) there's a code for 2 months of free Pro: https://twitter.com/perplexity_ai/status/1738255102191022359 (more file uploads, choose your model including GPT-4)
Very nice hack! I did a very similar project integrating a ChatGPT bot, but using a WhatsApp Business account instead of a fake Facebook contact. Unfortunately, I got my account blocked when Meta discovered I'm not a business. I'll redo the project with the FB account; it seems much easier.
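For anyone curious what the Business-account route looks like, here's a rough sketch of such a relay using the WhatsApp Cloud API webhook and the OpenAI chat completions endpoint. The payload shapes are from memory and all tokens/IDs are placeholders, so treat it as a starting point and check the current Meta and OpenAI docs:

```python
# Minimal WhatsApp <-> ChatGPT relay sketch (Flask + requests).
# Endpoint/payload shapes are from the WhatsApp Cloud API and OpenAI chat
# completions API as I recall them -- verify against the current docs.
import requests
from flask import Flask, request

app = Flask(__name__)
WA_TOKEN = "YOUR_META_ACCESS_TOKEN"        # placeholder
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"   # placeholder
OPENAI_KEY = "YOUR_OPENAI_API_KEY"         # placeholder

def ask_chatgpt(text):
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_KEY}"},
        json={"model": "gpt-3.5-turbo",
              "messages": [{"role": "user", "content": text}]},
    )
    return r.json()["choices"][0]["message"]["content"]

@app.post("/webhook")
def webhook():
    # Inbound WhatsApp messages arrive nested under entry/changes/value.
    value = request.json["entry"][0]["changes"][0]["value"]
    for msg in value.get("messages", []):
        if msg.get("type") == "text":
            reply = ask_chatgpt(msg["text"]["body"])
            requests.post(
                f"https://graph.facebook.com/v18.0/{PHONE_NUMBER_ID}/messages",
                headers={"Authorization": f"Bearer {WA_TOKEN}"},
                json={"messaging_product": "whatsapp", "to": msg["from"],
                      "type": "text", "text": {"body": reply}},
            )
    return "ok", 200
```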
No, the reason Macs are better for LLMs is memory bandwidth: 800 GB/s on the M2 Ultra. I couldn't find a good source, but it seems the Ally's memory bandwidth is around 70 GB/s.
A combination of high memory bandwidth and large memory capacity is necessary for good performance on LLMs. Plenty of consumer GPUs have great memory bandwidth but not enough capacity for the good LLMs. AMD's Phoenix has a memory bus too narrow to enable GPU-like bandwidth, and when paired with the faster memory it supports (LPDDR5 rather than DDR5) it won't offer much more memory capacity than consumer GPUs.
A mini PC with that chip, 1 TB of storage, and 64 GB of RAM (both replaceable) costs like 800€ and fits behind your monitor. Getting that much memory in a consumer GPU is definitely quite a bit more expensive. Also, for comparison, an M2 Ultra with that amount of storage and RAM is 4800€.
So I am not doubting that a computer six times as expensive is probably "better" by some metric, but given that drastic price difference, I am not sure it is enough.
While I 100% agree on the price comparison, you'll need to reach some performance threshold for an LLM to be considered usable. As someone not very knowledgeable on the topic, the sheer difference in the numbers leads me to question whether you could even reach that usable performance threshold with the 800€ mini PC.
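As a back-of-the-envelope check, assuming token generation is purely memory-bandwidth-bound (a common rule of thumb: every weight is read once per generated token, so tokens/s ≈ bandwidth / model size):

```python
# Rough decode-speed estimate: each generated token reads all weights once,
# so tokens/s ~= memory bandwidth / model size. Ignores compute, caches, and
# prompt processing -- a rule of thumb, not a benchmark.
def tokens_per_sec(bandwidth_gb_s, model_size_gb):
    return bandwidth_gb_s / model_size_gb

for name, bw in [("Ally-class iGPU (~70 GB/s)", 70), ("M2 Ultra (800 GB/s)", 800)]:
    print(name)
    print(f"  7B @ 4-bit (~4 GB):   {tokens_per_sec(bw, 4):5.1f} tok/s")
    print(f"  70B @ 4-bit (~40 GB): {tokens_per_sec(bw, 40):5.1f} tok/s")
```

By that rough estimate, a quantized 7B model is quite usable on the 800€ box (~17 tok/s), while a 70B model drops to ~2 tok/s there but stays comfortable (~20 tok/s) on the M2 Ultra. Whether the mini PC clears the "usable" bar depends mostly on which model size you need.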
Note that when referring to memory capacity, I specified LPDDR5, because that's the fastest memory option. If you want to go with 64GB of replaceable DDR5, you'll sacrifice at least 18% of the memory bandwidth. (And in theory the SoC supports LPDDR5-7500, but I'm not aware of anyone shipping it with faster than LPDDR5-6400 yet.) So you could get to 64GB on the memory capacity with a Phoenix SoC, but only by being at a 10x disadvantage on bandwidth relative to an M2 Ultra—which doesn't make a 6x price difference sound outrageous, given that we're discussing workloads that actually benefit from ample memory bandwidth.
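For anyone checking the arithmetic, peak bandwidth is just transfer rate times bus width. The bus widths below are the published ones (128-bit for Phoenix, 1024-bit for the M2 Ultra); DDR5-5200 as the replaceable-DIMM speed is my assumption:

```python
# Peak theoretical bandwidth = transfers/s * bus width in bytes.
# Bus widths/speeds are my assumptions for Phoenix vs. M2 Ultra.
def bandwidth_gb_s(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

lp = bandwidth_gb_s(6400, 128)   # Phoenix + LPDDR5-6400 (soldered)
dd = bandwidth_gb_s(5200, 128)   # Phoenix + DDR5-5200 SO-DIMMs (replaceable)
m2 = bandwidth_gb_s(6400, 1024)  # M2 Ultra: same memory, 8x wider bus

print(f"LPDDR5-6400 @ 128-bit:  {lp:6.1f} GB/s")  # 102.4
print(f"DDR5-5200   @ 128-bit:  {dd:6.1f} GB/s")  # 83.2 -> ~18.75% less
print(f"LPDDR5-6400 @ 1024-bit: {m2:6.1f} GB/s")  # 819.2 -> ~10x Phoenix DDR5
```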
I've observed the same pattern in the corporate world with folks who are close to the promotion stage. Nice people suddenly become bitter and very hard to work with. Only the best stay nice.
And leases: something very few people know is that Van Moof has a great corporate leasing program. All the big tech companies offer eBike commuting options that pay for Van Moof leases.
I think it is genius: the bikes are expensive to buy, but a $100-a-month lease including maintenance is very popular among big tech companies (especially if your company pays for it).
They could very well be losing substantial money on that. Leases only work if the quality of what you lease is high enough that you won't bleed to death on the availability guarantee. A financial lease insulates the lease provider from that, but if it is an operational lease, then whoever inherits the leases might be in for a very hard time.
100% agree, and unfortunately I don't have numbers. However, anecdotally, I work for a big tech company and have one of the leases, and it is a very popular option among my colleagues. I'd probably never buy one, but with the lease I'm a very happy customer. They also gave the option to upgrade to newer models for $10 more a month after a year, which I think helps them fix operational issues. There is also on-campus maintenance, which is very convenient. I think that if they operate/scale the lease business properly, it could become a very good business in its own right.
There are multiple of these lease suppliers, and then there is Van Moof themselves; I take it your deal is with Van Moof directly. That should - assuming the new owners have actually bought the lease contracts as well - give you a priority position for spare parts. Here is the FAQ of another such provider (there is also Athlon, possibly others):
Certainly a concern. So for Backblaze I use an email address on my own domain, which is then forwarded (thanks to Cloudflare; previously Gandi) to another email service, currently iCloud. I did this after hearing horror stories of losing email access with Google a few years ago, as you mentioned. I'm running on the assumption that Backblaze, Cloudflare, and Apple will not all try to screw me.
I'll say that for me the most important data is family photos, which I've got duplicated in Amazon Photos. B2 is the hedge against Amazon, and Amazon is the hedge against B2.
For other documents, I'm willing to trust B2 for now. They have not given me a reason not to trust them. One benefit of reading HN daily is that I'd hope to see red flags posted here if B2 were having problems. Maybe I'll consider having an alternative. I also don't want my data floating across multiple vendors, even if encrypted.