jorgemf's comments | Hacker News

Cost is relative. How much would it cost for a human to read 200k tokens and give you an answer? Probably much more than $3.


You are not going to take the expensive human out of the loop where the downside risk is high. You are likely to take the human out of the loop only in low-risk, low-cost operations to begin with. For those use cases, these models are quite expensive.


Yeah, but the human tends not to get morally indignant because my question involves killing a process to save resources.


My problem with this analysis is that it ignores who is using which computer. New hires in the company get the M3, while older hires have an M2, and the people who have been at the company the longest have an M1. Who is going to work on the more critical tasks with more changes in the code? Who is going to work mostly on easy bugs until they get some experience with the codebase? I bet that if you gave both populations the same computer, compile times would still be faster for the new people.

For me, the analysis doesn't have enough dimensions: it should take into account time since hire and seniority. I would also have added more types of graphs (boxplots seem a better way to compare the information), and I would have measured total CPU usage. The battery/AC analysis gave me the impression that the M3 might be underutilized and that it will be impossible to get lower compile times without faster single-core speeds (which might be relevant information for the future).
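As a rough illustration of the confounder argument above, compile times could be grouped by machine *and* tenure before comparing (all numbers below are invented for the sketch):

```python
# Hypothetical build records: machine model is confounded with tenure,
# since senior people (older machines) tend to run bigger builds.
from statistics import median

builds = [
    {"machine": "M3", "tenure_years": 0.5, "secs": 95},
    {"machine": "M3", "tenure_years": 0.8, "secs": 110},
    {"machine": "M2", "tenure_years": 2.0, "secs": 160},
    {"machine": "M2", "tenure_years": 3.0, "secs": 175},
    {"machine": "M1", "tenure_years": 5.0, "secs": 240},
    {"machine": "M1", "tenure_years": 6.0, "secs": 260},
]

def median_by(key):
    """Median compile time per group, for an arbitrary grouping key."""
    groups = {}
    for b in builds:
        groups.setdefault(key(b), []).append(b["secs"])
    return {k: median(v) for k, v in groups.items()}

# Comparing machines alone mixes hardware speed with seniority effects;
# adding a tenure dimension separates the two.
print(median_by(lambda b: b["machine"]))
print(median_by(lambda b: (b["machine"], b["tenure_years"] > 1)))
```

With real data you would plot these groups as boxplots rather than medians, but the grouping step is the same.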


I think Kotlin is one example. It uses the same idea, but with multiples of 10 for incremental releases and the numbers 1 to 9 for hotfixes. That is for the 3rd number; I do not know what will happen when the second number reaches 2 digits. I guess they will do something to make it comparable again.
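A small sketch of how such a scheme keeps versions comparable (the version strings are illustrative, not an exact Kotlin release history):

```python
# Incremental releases use multiples of 10 in the third component
# (1.9.0, 1.9.10, 1.9.20), leaving 1-9 after each one for hotfixes
# (so 1.9.21 is a hotfix of 1.9.20).
def parse(version: str) -> tuple:
    """Parse 'a.b.c' into an integer tuple, which sorts correctly."""
    return tuple(int(part) for part in version.split("."))

assert parse("1.9.10") < parse("1.9.20")  # next incremental release
assert parse("1.9.20") < parse("1.9.21")  # hotfix sorts after its base
assert parse("1.9.21") < parse("1.9.30")  # ...but before the next incremental
```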


You are assuming that the whole point of humanity's existence is to work? Because without work, people would be sloths? What about spending more time on healthy habits like working out, meeting family and friends more often, discovering the world, learning new things? Are retired people just sloths?


I’m more worried about people not being able to feed themselves because their labor became worthless. They will effectively be frozen out of the economy as they have nothing to trade with.


If AI does everything, the economy won't make sense anymore. Maybe there will be a basic income, or anyone will just ask for what they want and AI will provide it.

We thought AI would replace the low-level jobs first, but it seems the creative jobs are going first (art, software development, etc.). Bear that in mind.


No matter who gets replaced first, someone is getting screwed.

Frankly, if it's the higher-end jobs getting replaced first, that would likely spill over to the lower-end ones as the people who lost their jobs resort to taking lower-end work to survive, flooding the market.


That's under the assumption that nothing else will change. But that is not the case; the system would have to adapt. One possibility is that we won't use money anymore, and there are a lot of in-betweens. But what you certainly cannot do is stop the change that is coming.


How to adapt the system is the big question. In the worst case the system doesn’t adapt and lots of people are plunged into poverty.


So you think that in a world of AGI, humans will have all our needs met by machines?

Or do you think there will always be space for humans to provide value to other humans, even if machines surpass us in intelligence?


Frankly, I think eventually machines will do it all. I see AGI as the universal automation that can do everything a human can - apart from “being human”.


Well, that's it, isn't it?

Even if AI can do everything, you'd still want the authentic human experience.

99% of people are far smarter than horses but people still pay to ride horses.

I don't see why someone wouldn't want to pay to... ride me... uh, in a manner of speaking, of course (of course). I mean, look at me.


> 99% of people are far smarter than horses but people still pay to ride horses.

How many working horses are there today vs before automobiles?

Heck, how many domestic horses are there today vs before automobiles?


> How many working horses are there today vs before automobiles?

Well, exactly. The fewer working horses there are, the more expensive and exclusive it would be to ride them.

There could be 1 trillion automobiles and I bet you, none of these automobiles would compare to riding a real, live horse.

Similarly, there could be 1 trillion AI robots, they could do everything better than a human, and yet I bet you'd still want to ride (or otherwise experience) a real, live human.

My point is that if automobiles were always better than horses in every way, then nobody would want horses. But even today, with the amazing automobiles that we have, some of which even faster and more reliable than most horses, it's clear that we still want horses.

My question is, if horses were as intelligent as us and they could have their basic needs met extremely cheaply, would they be willing to work at all, apart from the occasional ride? Because the horse labor pool would shrink immensely if they didn't really want to work.


> Well, exactly. The fewer working horses there are, the more expensive and exclusive it would be to ride them.

So you expect there to be fewer of us?

Imagine everyone who is selling labor today out of a job.


> > Well, exactly. The fewer working horses there are, the more expensive and exclusive it would be to ride them.

> So you expect there to be fewer of us?

Fewer working people, not fewer people.

> Imagine everyone who is selling labor today out of a job.

If it's because they don't need it anymore, that's great!


> Fewer working people, not fewer people.

What are the people not working going to eat?


If they get hungry enough, the working ones.


Time is the fourth dimension. The input data is a video, so the model learns the colors and the positions of the elements (basically points). You can render the scene from any angle at any time once the model is trained.
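A minimal sketch of that idea: the scene becomes a learned function of position *plus* time, queryable at any (point, timestamp) pair. This is a toy stand-in, not any particular paper's model, and the "trained" weights here are just random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for trained weights of a tiny MLP: 4D input -> 4 outputs.
W1 = rng.normal(size=(4, 32))
W2 = rng.normal(size=(32, 4))

def scene(x, y, z, t):
    """Map a 4D point (3 spatial coords + time) to color (r, g, b) and density."""
    h = np.tanh(np.array([x, y, z, t]) @ W1)  # hidden layer
    r, g, b, sigma = h @ W2
    return (r, g, b), sigma

# Once such a function is trained from video frames, a renderer can query it
# along camera rays for any viewpoint and any timestamp.
(rgb, density) = scene(0.1, 0.2, 0.3, t=0.5)
```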


SEEKING WORK | Spain | Remote (EU and US time zones)

  Technologies: TensorFlow, PyTorch, deep learning, LLMs, diffusion models, GANs
  Résumé/CV: http://jorgemf.github.io/cv.pdf
  Personal website: http://jorgemf.github.io
  Email: (in the CV)


  Location: Spain
  Remote: Yes
  Willing to relocate: Potentially
  Technologies: TensorFlow, PyTorch, deep learning, LLMs, diffusion models, GANs
  Résumé/CV: http://jorgemf.github.io/cv.pdf
  Personal website: http://jorgemf.github.io
  Email: (in the CV)


The main reason is that you might not want the raw information but some reasoning on top of it. An LLM is not only the context but all the information it has been trained with. For example, when a math student asks a question, they don't want the raw theorems but some reasoning with them, and current LLMs can do that. They will make mistakes sometimes because of hallucinations, but for questions that are not very difficult they usually give you the right answer. And that helps a lot when you are not an expert in the domain. That is the reason GPT-4 is a great tool for students: it helps you understand the basics as if you had a teacher with you.


I think your argument is similar to the one we had with calculators and later with the Internet. ChatGPT is just another tool. For sure there are going to be lazy people who use it and won't learn anything, but it is just as sure to be a boost for many others. We will adapt.


Calculators solve problems that have exactly one correct answer. You cannot plagiarize a calculator. They are easy to incorporate into a math curriculum while ensuring that it stays educationally valuable to the students.

LLMs, the internet, even physical books all tend to deal primarily with subjective matters that can be plagiarized. They're not fundamentally different from each other; the more advanced technologies like search engines or LLMs simply make it easier to find relevant content that can be copied. They actually remove the need for students to think for themselves in a way calculators never did. LLMs just make it so easy to commit plagiarism that the system is starting to break down. Plagiarism was always a problem, but it used to be rare enough that the education system could sort of tolerate it.


I argue that calculators are overtly harmful to arithmetic prowess. In summary, they atrophy mental arithmetic ability and discourage practice of basic skills.

It pains me (though that's my problem) to see people pull out a calculator (worse, a phone) to solve, e.g., a multiplication of two single-digit numbers.


Sure, calculators made people worse at mental arithmetic, but arithmetic is mechanical. It's helpful sometimes, but it's not intellectually stimulating and it doesn't require much intelligence. Mathematicians don't give a shit about arithmetic. They're busy thinking about much more important things.

Synthesizing an original thesis, like what people are supposed to do in writing essays, is totally different. It's a fundamental life skill people will need in all sorts of contexts, and using an LLM to do it for you takes away your intellectual agency in a way that using a calculator doesn't.


Engineers care about arithmetic. Carpenters do too. Any number of other creative endeavors require (or, at least, are dramatically improved by) the ability to make basic calculations (even if approximate) quickly in your head.

Arithmetic is the "write one sentence" of composition. The ability to think through a series of calculations with real-world context and consequences is the 5-paragraph essay. If you're not competent with the basics, you won't be able to accomplish the more advanced skill. Being tied to a calculator (not merely using, but being unable to not use) takes away intellectual agency in the same way as an LLM-generated essay (though, I'll agree, to a lesser degree).


> Mathematicians don't give a shit about arithmetic

Sure, once you know how to multiply you don't care about it. But try learning first-year CS math without perfect command of the multiplication table.


Exactly. My wife tutors kids at the high school who never mastered arithmetic and are trying to learn algebra. It's hopeless.


That was true before calculators too. Correlation is not causation.


I'm not sure what you mean. These kids can't do arithmetic without a calculator. While it was possible to simply not learn arithmetic before calculators, it wasn't possible to hobble onward using the calculator as a crutch.


If those kids were truly applying themselves to the algebra, I think they'd quickly internalize arithmetic too as they used it. But whatever reason led those kids to not do arithmetic without a calculator could well be a reason they don't do well at more advanced math.


My point is failing to learn the basics is a huge hurdle to learning more advanced things. You posit that one could learn the basics and the advanced math at the same time. Maybe, but that would clearly be harder than doing them in order.

Fluency in arithmetic isn't something drilled into kids just to be obnoxious, it's foundational to almost all future math skills.


> They're busy thinking about much more important things.

Generally I agree (because the content of modern mathematics is largely abstract), but to nitpick a bit, number theory is part of mathematics too!

Ramanujan and Euler, for example, certainly cared a lot about 'arithmetic', and historically, many parts of mathematics have been just as 'empirical' in terms of calculating things as they've been based on abstract proof.


Two single digit numbers is indeed sad, but I pull out a calculator daily to do math I could have done in my head. I don’t feel that that is inherently bad.


Not exactly related, but your comment about plagiarism made me think of my days of writing papers and citing APA style. How do you cite a source if it came from ChatGPT and it likely doesn’t fully understand where it got its information?


You don't. You're only supposed to cite primary sources and peer-reviewed secondary sources. ChatGPT is a tertiary source, like dictionaries and encyclopedias. You use tertiary sources to get a quick overview of a topic before you begin delving into primary and secondary sources, but you never include tertiary material in your paper.


Good to know. Thank you for the response!


It'll happily generate sources for you -- just be aware that most of the citations will be bogus. Not sure how many teachers/professors test the validity of citations.


I’m guessing they will have to start checking. Even if it’s just sampling a few for validity.


Facts cannot be plagiarized.

Copyright protects specific expression, and reproducing specific expression is explicitly a non-goal of LLMs.


Plagiarism and copyright violation are subtly different. Plagiarism is just presenting someone (or something) else's work as your own. It may or may not be a copyright violation.


This semester, I regularly conduct RFC / whitepaper / chapter reading sessions during my hours. I let students use perplexity.ai, bard, chatgpt to help them understand what they otherwise can't.

Once they're done, they submit a one-pager on 3 to 5 subtle or smaller things they find the most interesting or counterintuitive. At the end of the semester, I intend to share all their one-pagers among their classmates and hold an open-book test on them. Let's see how that pans out.


I hope it is successful. I'm too old to be in primary education anymore, but I would have loved to have had access to an LLM back then that I could pester with an infinite number of questions until I grokked the subject matter.


A calculator is an impressive single-function tool. LLMs and other forms of AI are multi-function problem solving tools. ChatGPT and other AI tools are closer to the introduction of the world wide web than they are to the invention of the calculator.


As far as I know, you can preload the data into the GPU before processing it. What is the difference/advantage of what you are proposing?


You wouldn't need the hardware to support arbitrary load/stores. In particular, you could get rid of (some of) the address lines...

I'm unsure if this would be much of a win.


Well, the data is way smaller than the model (at least with the current trend), and you probably still need random access for the weights of the model. I am not sure it is a gain worth pursuing.
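Rough back-of-the-envelope arithmetic behind "the data is way smaller than the model" (the parameter count and batch shape here are invented for illustration, not measurements of any specific system):

```python
# Hypothetical sizes: a 7B-parameter model in fp16 vs one fp16 activation
# tensor for a batch of sequences.
params = 7e9
weight_bytes = params * 2                         # fp16: 2 bytes per weight

batch, seq_len, d_model = 8, 2048, 4096
activation_bytes = batch * seq_len * d_model * 2  # one fp16 activation tensor

# Weights dominate by roughly two orders of magnitude, so memory traffic
# for the weights, not the input data, is the thing worth optimizing.
print(weight_bytes / activation_bytes)
```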

