I really hope this technology becomes the future of political campaigning. The signage industry, which prints billions of posters, plastic lawn signs, and banners destined for the post-election landfill, needs to be disrupted.
These days I get a daily dose of amazement at what a small engineering team is able to accomplish.
This is already quite common with deepfakes of a politician's voice. While I agree on the potentially dystopian implications of this, it seems like it would be a huge improvement for a politician to put campaign funds into burning a little GPU time on answering specific questions from constituents (i.e. the LLM is reading their stated policy positions and simply delivering a tailored response), rather than wastefully plastering their name all over town.
Heh, I'm not even sure that would change much honestly. If I define a "lie" for the purpose of this post (and nothing else) as "a politician's claim they support a position during election season that they have manifestly not supported during their existing tenure as a politician", even cynical ol' me is a bit shocked by the amount of lying I've seen in this campaign. I'm not even talking about forward lying here about something they won't do for whatever reason once they get into office, I'm talking about their platform incorporating things that they were denouncing a year ago and vigorously voting against.
Thanks for these thoughts and compliments. I love the idea of preventing landfill with this tech. Our team is awesome and we really love our customers and all the jobs that can be done with this kind of tech!
This is a great idea. I have even hit this pain point when developing a healthcare app for hospitals that was primarily used in the United States. There are certain communities, even just within California, where it is common to have patients who only understand Spanish, Mandarin, or Japanese.
Any plans to extend this to iOS/Android development in the future? I assume it would already be easy to integrate this into React Native.
Also, is there a way for me to provide explicit additional context to the `t` function for the translation? Essentially a string that is appended to the LLM input for translation. For example, in Japanese there is often a significant difference between formal and informal language, and it is common to add post-positional particles such as や, が, and の to make titles and labels sound more natural. I see you have addressed many other special cases around numbers/dates/etc, so certain flags like formal/informal, regional dialect, etc may be valuable future additions.
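To make the idea concrete, here's a rough sketch of the kind of signature I'm imagining - purely hypothetical on my part, not your actual API:

```typescript
// Hypothetical options for a context-aware t() call; every name here is made up
// to illustrate the feature request, not taken from the real library.
type TranslateOptions = {
  context?: string;                  // free-form hint appended to the LLM prompt
  formality?: "formal" | "informal"; // e.g. keigo vs. casual Japanese
  dialect?: string;                  // regional variant, e.g. "Kansai"
};

// Stub implementation so the sketch type-checks; the real function would call
// the LLM-backed translation pipeline with the extra context attached.
function t(key: string, options?: TranslateOptions): string {
  void options;
  return key;
}

// A short screen title where the hint steers the Japanese rendering toward a
// natural noun phrase rather than a literal word-for-word translation.
const heading = t("dashboard.title", {
  context: "Short title shown in the app header; prefer a concise noun phrase.",
  formality: "formal",
});
```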
Overall looks really nice and I look forward to trying this the next time the need arises.
We actually do have React Native support right now - it's in our docs! As for adding manual context, we're planning on adding that, but we don't want people to go overboard with adding extra context where it's not needed. The goal is to surface suggestions when we need additional context beyond what we get from the tree traversal. As for tone/regional dialect, that's something we're adding to the dashboard so that all translations will have that info passed to the LLM to maintain consistency throughout the app!
I'm not sure the analogy still works if you're trying to compare fossil fuels to LLMs. A few decades ago, virtually all gasoline was full of lead, and CFCs from refrigerators created a hole in the ozone layer. In those cases it turned out that you actually do need a few guardrails as technology advances, to prevent an existential threat.
Although I do agree with you that in this particular situation, the LLM safety features have often felt unnecessary, especially because my primary use case for ChatGPT is asking critical questions about history. When it comes to history, every LLM seems to have an increasingly robust guardrail against making any sort of definitive statement, even after it presents a wealth of supporting evidence.
The alarming trend should be how even a slightly contrarian point of view is downvoted to oblivion, and that newer members of the community expect it to work that way.
HN is a place for intellectual curiosity. For over a decade I have seen great minds respectfully debate their points of view on this forum. In this particular case, I would have been genuinely interested to learn why exactly the original commenter is advocating for a "race to the bottom" - in fact, there is a sibling comment to yours which makes a cogent argument without personally attacking the original commenter.
Instead, you devoted 2/3 of your comment toward berating the OP as being responsible for your perception of HN's decline.
I find it strange you took such a measured stance on my comment yet gave the OP a pass, despite it being far more "berating" than mine.
As for a race to the bottom, it's as simple as embracing and unleashing AI despite its lack of quality or ability to produce a product worth anything. But since it's a force multiplier and cheaper (for the user at least; all these AI companies are operating at a loss, see Goldman and JP Morgan's reports on the matter), it is deemed "good" and we need to pull ourselves up by our bootstraps - whatever that means in this context.
This is incredible. I uploaded a CSV with ~6000 rows containing campaign finance data for a particularly corrupt local politician and asked "what was the total contributed amount in [year]". Not only did it produce the correct answer (in around the same amount of time it took me to calculate it on my end) but it also seemed to understand that the spreadsheet was related to campaign finance in the "summary" portion of the response.
The most useful aspect was that I could ask "what was the total contributed amount between January and June of 2020" and get an accurate answer for that as well. Since the date column is provided as an "MM/DD/YYYY" string, I would normally have to do some boilerplate work to sanitize this.
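For reference, this is roughly the boilerplate I mean (a TypeScript sketch; the row shape and column names are made up):

```typescript
// Sketch of the manual sanitization the tool handled for me: parse "MM/DD/YYYY"
// strings, filter a date range, and total the contribution amounts.
type Row = { contributionDate: string; amount: number };

function parseMDY(s: string): Date {
  const [mm, dd, yyyy] = s.split("/").map(Number);
  return new Date(yyyy, mm - 1, dd);
}

function totalBetween(rows: Row[], start: Date, end: Date): number {
  return rows
    .filter((r) => {
      const d = parseMDY(r.contributionDate);
      return d >= start && d <= end;
    })
    .reduce((sum, r) => sum + r.amount, 0);
}

// e.g. January through June of 2020:
// totalBetween(rows, new Date(2020, 0, 1), new Date(2020, 5, 30));
```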
For my particular use case, the charting aspect left a few things to be desired - once I grouped campaign donations by contributor, I could only see the first 10 rows in the AI response, with no option to expand the output. But overall I was truly blown away that something like this is even possible for a small team to build.
> For my particular use case, the charting aspect left a few things to be desired - once I grouped campaign donations by contributor, I could only see the first 10 rows in the AI response, with no option to expand the output.
Insert it as a table on the page (you should see a button); it will then print the whole result of that query into the spreadsheet. You can also check and validate the SQL first, then print to table after that.
Also keep an eye on the limit - we default to 10,000 to keep it snappy, but if you want to make it larger it's a click away. The "summarize table" button should automatically raise the limit to handle 1B+ rows.
We are generally a generation ahead of Sheets and Excel. You might be able to do some of these things in the older software, but it won't be a button press. The ability to run data queries (and let the AI do it for you) right next to traditional A1 notation is something we invented.
This is a great idea, especially when you consider the untold millions that companies like Twilio and Stripe already spend on educating developers. Perhaps along with "language tracks" there could be "API tracks".
I burned out around 3 years ago and couldn't fix it with a year of traveling the world and working on side projects - thankfully I discovered Vipassana meditation which helped me bounce back stronger than ever. Hopefully someone else in a similar situation might find this potential solution useful.
Not OP, but for me what helped the most is samadhi / shamatha meditation, where you go into this calm-abiding mode. You are calm but alert, with the whole body energized. This is an enjoyable state to be in, sometimes _very_ enjoyable. I listened to the guided meditations from the Alan Wallace retreat as well as a guided meditation retreat from Rob Burbea.
Incredible to see this posted today, as I was just in a long discussion yesterday about the future of AI in education. It's a neat concept but I think you're spreading yourself too thin on content, especially given the number of courses on the site. It might be better to focus on building out a core set of lessons for a targeted group of learners first. At the end of the day, even if you're using AI to generate the content, a human being needs to do the proofreading and quality assurance.
The design also has ample room for improvement - it's a bit hard to navigate the site and stay engaged with the content at the moment.
I skipped to the "why build a database" section and then skipped another two minutes of his tangential thoughts - seems like the answer is "because Moore's law"?
In this particular case, it's worth the reminder that the patriarch of the Quandt family that controls BMW today, Günther Quandt, was a strong financial supporter of the Nazi party, and he and his sons (Herbert and Harald Quandt) used tens of thousands of slave and forced laborers in their factories before the collapse of Nazi Germany. I'm not trying to tell anyone to boycott BMW or Mercedes - just trying to add some historical context for those who instantly react to Tesla news with "Elon is bad".
Lived in Germany for 8 years. The living heiress to a biscuit company fortune is still a Nazi sympathiser today. Nobody cares about the Quandt patriarch, who is dead. Every company in Germany had ties to the NSDAP; let's focus on real Nazis and antisemites and racists (Elon), not dead ones.
Genuinely curious here and not trying to start a debate or sound rhetorical (I just don't keep up with the news that much) - what did Elon do that was racist?
Not that much as far as I can tell. He's against DEI and uncontrolled immigration across the US-Mexico border.
He also annoyed me as a Brit by recently retweeting stuff claiming civil war was inevitable in the UK because we'd let a lot of Muslims in. Not that I think you shouldn't be able to criticize Islam, but it's very far from the reality here, and there's no need to amplify a lot of bunk from people who have probably never been to the UK.