Hacker News | vid's comments

I don't understand how they can roll out this feature across all regions. Call recording laws differ across jurisdictions, and this is like a superset of call recording. I think it's a useful feature for a person, but collecting info on other people (in calls, emails, etc. that appear on screen) and being able to process it is like an evil superpower.


They are not "light," though.


1.62 kg Pro vs 1.24 kg Air? For me that's the same class.


A third heavier, and it really feels heavy for its size. It's one of the things that made me decide not to get a Mac. I don't know why they use so much aluminum.


They are super sturdy. It's a single piece of metal that is carved into shape. And aluminium is also very easy to recycle.

I personally preferred the design of the first gen retina Macbook Pro. It felt so sleek and thin. The current design is a bit too chunky and boxy for my taste.


If we're talking æsthetics, I find they all look like silver blimps (glad they're finally shipping darker tones, but it's an affectation). I've seen a lot of dented Macbooks. I don't think they're any sturdier than any other well-built laptop, which lasts until it's obsolete (5-10 years). The alu does act as a heat sink, but I doubt it's necessary for the entire body.

I'm sticking with Thinkpads for now; I like the function-dictates-form brutality of their design, and I think carbon/magnesium and some plastic is a good approach for a much lighter result. A Macbook Pro is not only heavy, but slippery. A lot of carbon is emitted recycling and even shipping aluminum, and there's the diminishing ability to easily upgrade/repair components. I don't know ultimately what the environmental impact is between the materials, but Apple has advantages of consistency and scale; as much as I like Thinkpads, I don't like that there are a dozen different models each year, which would be impossible to effectively recycle even if they had a program in place.

We're pretty much into a blog here (here's a picture of a beach), but I tried a Macbook and had to return it because of ergonomic factors including weight; I got a pretty great 16" Thinkpad with the same weight as the 14" MBP, and I don't even want to think about the weight of the 16" MBP. It's frustrating that other top-tier companies or the industry can't find a way to have efficient product cycles (Framework is getting there). I guess it doesn't help that Apple has patented their unibody design, which shows how much they care about the environment in the larger sense.


I'm glad you found something you enjoy :)


Whenever I start looking into a device like this, I'm reminded how much progress has been held back by the grip Amazon has on the book world. Building on shared book annotations would be a great way to develop intelligence, but it can only be on the down-low.


I don't understand why it's even legal; it's akin to call recording, which is illegal in many areas. Basically I think it's a great idea for people to AI-process their own information, but when it comes to conversations, it's highly problematic.


Yes. It's not really "open" if it depends on a non-libre service. To be legit, they must at least enable this experimentally.


Usually this is a sign of a project that isn't ready for a lot of attention. It's very brave and a little bit stupid to offer open source to the world (especially without a sustainability plan). If you put a lot of effort into 'marketing' at an early stage, you've probably just overpromised to people not focused on that domain; you won't be able to deliver, you'll burn out, and it will become yet another abandoned project.

Perhaps in a few years, as the project picks up users and contributors, it will get its own website and support network and be easy to understand and use by "anyone."


I can’t tell if your comment was meant to be ironic but OSM has been around for 20 years, and its data is used by billions of people.


The comment was about https://github.com/markuman/sms, not OSM.


I don't think this project is looking for a lot of attention.

It is more like a sample of cool stuff you can do.


I agree, but still, 1-2 paragraphs about why this was made, what it is for laypeople, and where it fits in the OSM ecosystem would help. Possibly my extrovert self makes it natural to write similar intros to the repos I make public. (I even used to blog!)


My impression is larger or more synthetic orgs don't want devs to lead things anymore; they want a very top-down development of vision -> product -> UX -> project management, and then the developers scurry around and do whatever it is they do using cloud something something, hopefully in as commodity a way as possible. The article clearly states the problems with this: devs can no longer understand the whole picture or make important top-level suggestions, and the cloud dictates what can be done.


$0, unless maybe someone finds a way to not pillage the service's data while adding a lot of actual value, and doesn't squat on this ultimately simple thing that should be a small piece of a much larger thing.

Open source AI and browser/OS companies are going to provide an answer soon anyway.


You don't seem willing to share how you did anything, you only draw attention to your works. In the reddit thread, several people asked about your 'talk like a pirate' training, and you never responded. In this thread, you imply you'll talk about how you used this visualization in your training, yet you never do.


I’ve gone into pretty great detail on the visualization in the README of my repo. The main utility is detecting individual layers being overfit.

There are some specifics about OpenPirate that I’m not at liberty to share at the moment, but those are unrelated to this visualization. I’ve published the model weights under a permissive license, and I hope to publish more of the training code in the future.

If you have any questions about how to use the code in my neural flow repo just ask.
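For readers who haven't dug into the repo: the general idea of spotting an individual layer being overfit is to collapse each layer's hidden states into a row of an image and look for rows whose statistics stand out. Below is a generic Python sketch of that idea (not the author's actual NeuralFlow code; the function name and shapes are assumptions for illustration):

```python
import numpy as np

def layer_activation_image(hidden_states):
    """Collapse a list of per-layer hidden states (each a seq_len x hidden_dim
    array) into a 2D image: one row per layer, one column per hidden dimension.
    A row whose magnitudes spike relative to its neighbors hints that that
    layer is drifting or overfitting during fine-tuning."""
    rows = [np.abs(h).mean(axis=0) for h in hidden_states]  # mean |activation| per dim
    return np.stack(rows)  # shape: (n_layers, hidden_dim)

# toy example: 4 layers, 8 tokens, 16-dim hidden states
rng = np.random.default_rng(0)
states = [rng.normal(size=(8, 16)) for _ in range(4)]
img = layer_activation_image(states)
print(img.shape)  # (4, 16); feed to plt.imshow(img) to eyeball per-layer drift
```

In practice you'd pull real hidden states (e.g. a transformer's per-layer outputs) instead of random arrays, and render the matrix with an image plot so outlier rows jump out visually.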


OK, sorry if I missed that then, but perhaps a direct link here or there would help since a number of people asked the same thing. I followed a link to your huggingface page on reddit, and there the obvious README doesn't talk about specifics[1].

1. https://huggingface.co/valine/OpenPirate/blob/main/README.md


Yeah I apologize, a lot of the information is scattered across threads right now. I should have spent more time compiling everything in one place.

This comment chain in particular might have some of what you’re looking for:

https://www.reddit.com/r/LocalLLaMA/comments/1ap8mxh/comment...

Other relevant threads to put it all in one place:

https://www.reddit.com/r/LocalLLaMA/comments/198x01d/openpir...

https://www.reddit.com/r/LocalLLaMA/comments/19a5hdx/morehum...

https://www.reddit.com/r/LocalLLaMA/comments/1apz94o/neuralf...

https://github.com/valine/NeuralFlow/blob/master/README.md

The one thing I don’t talk about is the specifics of the instruction generalization which unfortunately I’m not able to share, even though I very much want to.


I don't think you should be apologising, there is always room for improvement. Nice work!


Super interesting project; I like its focus. I'm wondering if the author has looked into CozoDB, or other databases that combine vectors with graphs/triples, since neuro-symbolic is probably the best path. https://docs.cozodb.org/en/latest/releases/v0.6.html talks about this idea.
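To make the "vector + triples" idea concrete, here is a minimal Python sketch (this is not CozoDB's actual API; the data and helper names are invented for illustration) of a hybrid query: a cosine-similarity lookup narrows the candidate entities, then a symbolic triple store supplies structured facts about them:

```python
import numpy as np

# toy triple store: (subject, predicate, object)
triples = [
    ("cozodb", "is_a", "database"),
    ("cozodb", "supports", "datalog"),
    ("duckdb", "is_a", "database"),
]

# toy embedding table keyed by entity name
embeddings = {
    "cozodb":  np.array([0.9, 0.1]),
    "duckdb":  np.array([0.8, 0.3]),
    "pytorch": np.array([0.1, 0.9]),
}

def nearest(query, k=2):
    """Vector step: cosine-similarity top-k over the embedding table."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(embeddings, key=lambda e: cos(query, embeddings[e]), reverse=True)
    return ranked[:k]

def facts_about(entity):
    """Symbolic step: pull all triples whose subject matched the vector search."""
    return [(s, p, o) for (s, p, o) in triples if s == entity]

# hybrid query: fuzzy retrieval feeds exact graph lookup
hits = nearest(np.array([1.0, 0.0]))
results = [fact for e in hits for fact in facts_about(e)]
print(results)
```

A real vector+graph database does both steps inside one query language (and with a proper index instead of a linear scan), which is what makes the combination attractive for neuro-symbolic work.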


Interesting, thanks for sharing, I'll take a look!


Extremely interesting read, thanks for sharing.

