dlojudice's comments

The link for the docs: https://ai.google.dev/gemini-api/docs/thinking-mode

"Thinking Mode is an experimental model and has the following limitations:

32k token input limit Text and image input only 8k token output limit Text only output No built-in tool usage like Search or code execution"
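
For the curious, here is a minimal sketch of calling it from the google-generativeai Python SDK. The model name is my assumption based on the experimental release at the time, so check the docs for the current identifier:

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # Model name assumed from the experimental docs; may have changed since
    model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

    # Text-only prompt, kept within the 32k-token input limit noted above
    response = model.generate_content("How many Rs are in 'strawberry'?")
    print(response.text)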


The dumb terminal is back, now cloud-oriented and built by Microsoft.


"Gemma Scope is a research tool for analyzing and understanding the inner workings of the Gemma 2 generative AI models. The tool allows you to examine the behavior of individual AI model layers of Gemma 2 models, while the model is processing requests. Researchers can apply this technique to examine and help address critical concerns such as hallucinations, biases, and manipulation, ultimately leading to safer and more trustworthy AI systems." [1]

Would it be a stretch to say that this kind of output is the "abstraction" mode of a model? In other words, linking the semantics of a word or text to more abstract concepts (e.g., cat -> animal, beans -> food).

After all, the capacity for abstraction is fundamental to scientific and creative development.

[1] https://ai.google.dev/gemma/docs/gemma_scope
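
This is not Gemma Scope's actual API, just a toy PyTorch sketch of the sparse-autoencoder idea behind it (all shapes, weights, and thresholds here are made up): project a dense layer activation into a much wider, sparse feature space, where individual dimensions tend to line up with human-readable concepts.

    import torch

    d_model, d_sae = 2304, 16384          # hidden size / SAE width (illustrative)
    W_enc = torch.randn(d_model, d_sae)   # stands in for trained encoder weights
    b_enc = torch.zeros(d_sae)
    theta = torch.full((d_sae,), 0.5)     # per-feature JumpReLU thresholds

    acts = torch.randn(1, d_model)        # one residual-stream activation

    pre = acts @ W_enc + b_enc
    feats = pre * (pre > theta)           # JumpReLU: keep only above-threshold features

    # The few active features are what you'd inspect for concepts
    # like "cat -> animal" in the abstraction sense above
    top = feats.topk(5)
    print(top.indices, top.values)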


Interesting idea for anyone struggling with prompts. In my case, I'm hoping to achieve through prompting what I can currently only get from more advanced models. Let's see!


I've been doing this for over 20 years, but I confess I've never gone back to look at the tickets. Maybe one day, out of nostalgia, I'll open the dusty box full of old tickets and relive memories of shows and movies, good and bad. For now, what matters is the feeling that part of my life's memory is being preserved, even if I don't know exactly for what...


Some tickets were awesome. I'm in France, and I still remember getting the physical red-star ticket from JJ Goldman (a French artist) for his communist tour, or the photo-negative film artifact; he always made sure the physical artifacts were never just a simple printed piece of paper.


Finally. What else is missing for Anthropic's end-user UI to be on par with ChatGPT? My impression is: not much. Congrats!!


Voice, image generation, and that's about it.

Anthropic is a million times better overall, IMO.


Anthropic's prompt length limit is still too short compared to GPT-4o.


Pat Gelsinger and Lisa Su interview:

https://www.youtube.com/watch?v=7y32wpDhIGM


God, Su is barely intelligible. Her noise gate is turned up to hell.


Awesome improvements, but compared to Claude Artifacts, it lacks the HTML/JS "Preview" where you can run the code and check/validate the result without leaving the browser. That's a killer feature.


Preview and publish, where you can share a link to a functioning version of the artifact.


Great video! I didn't know about this paper by Peter Naur [1].

[1] http://pages.cs.wisc.edu/~remzi/Naur.pdf


I'm not a data scientist myself, but as someone doing a master's in complexity and grappling with a lot of missing data, this article on multiple imputation really caught my attention.
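
This is not the article's code, just a minimal sketch of the idea with scikit-learn's IterativeImputer (the toy data is made up): instead of filling each gap once, you draw several plausible completions and let their spread carry the uncertainty.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    # Toy dataset with missing values
    X = np.array([[1.0, 2.0, np.nan],
                  [4.0, np.nan, 6.0],
                  [7.0, 8.0, 9.0],
                  [np.nan, 11.0, 12.0]])

    # Multiple imputation: several posterior draws, not a single point estimate
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(X)
        for seed in range(5)
    ]

    # Spread across draws reflects uncertainty about a missing entry
    print(np.std([m[0, 2] for m in imputations]))

Downstream, you would fit your model on each completed dataset and pool the estimates (Rubin's rules).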

