Thank you! It's been interesting to watch HN playing around with it. This community definitely phrases its search queries differently from how many government affairs professionals would (especially to try smoke tests like yours), so I'm glad it's holding up :)
Give me keywords to search for based on this sentence "international relations about the country with the capital city of Ankara"
You get the following:
- Turkey international relations
- Ankara diplomacy
- Turkey foreign policy
- Turkey global partnerships
- Turkey international politics
- Turkey geopolitical strategy
- Turkey foreign affairs
- Turkey global relations
- Turkey NATO relations (if relevant to your topic)
- Ankara as a diplomatic hub
So it is not surprising that the same link was returned.
This is a demo from a small startup dedicated to enhancing government transparency, which I greatly appreciate. My expectations are calibrated to that goal, which is why I call this a smoke test.
Achieving accuracy with RAG and LLMs is a challenging task that requires balancing precision and recall. For instance, when you type "Ankara" into GPT-4o, it provides information about Turkey. However, searching "Ankara" in their product does not yield articles related to Turkey.
> Achieving accuracy with RAG and LLMs is a challenging task that requires balancing precision and recall
The challenge is domain knowledge, not tech, in my opinion. There are dozens if not hundreds of companies providing RAG and LLM solutions, but the challenge is, as you pointed out, what to do when you encounter something like "Ankara".
For Best Buy, this might not mean much, unless there is a Best Buy in Turkey. For a government-related site, cities and geography are important, so trying to extract additional meaning from "Ankara" is probably worthwhile.
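One lightweight way to capture that extra geographic meaning is a gazetteer-style lookup that expands capital-city terms to their countries before retrieval. A minimal sketch in Python (the lookup table and function names here are hypothetical illustrations, not the product's actual approach):

```python
# Hypothetical capital-city-to-country gazetteer for query expansion.
CAPITAL_TO_COUNTRY = {
    "ankara": "Turkey",
    "paris": "France",
    "tokyo": "Japan",
}

def expand_query(query: str) -> list[str]:
    """Return the original query plus country-level variants for any
    capital city it mentions, so that "Ankara diplomacy" also matches
    documents indexed under "Turkey"."""
    variants = [query]
    lowered = query.lower()
    for word in lowered.split():
        country = CAPITAL_TO_COUNTRY.get(word)
        if country:
            variants.append(lowered.replace(word, country.lower()))
    return variants

print(expand_query("Ankara diplomacy"))
# ['Ankara diplomacy', 'turkey diplomacy']
```

A real system would use a proper entity linker rather than a hand-built table, but the precision/recall trade-off is the same: every expansion improves recall for queries like "Ankara" at the cost of precision for queries where the city itself is the topic.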
Which category did you select? If you select Custom, it just says to contact them. If I want to search for another region, why should I have to select UK or US?
Location: New York, NY
Remote: Yes
Willing to relocate: Yes
Technologies: Python, PyTorch, TensorFlow, C/C++, Julia
Résumé/CV: Available upon request. <https://www.linkedin.com/in/enisberk/>
Email: hire[at]enisberk dot com
Scholar: <https://scholar.google.com/citations?user=AH-sLEkAAAAJ&hl=en>
PhD Candidate in Machine Learning specializing in audio and multimodal data analysis. My research focuses on applying machine learning techniques to extract insights from various audio and sensory data modalities. I'm particularly interested in bridging the gap between audio and other modalities for tasks like speech recognition, sound event detection, and multimodal content analysis.
Experience:
- Developed ML models for audio classification.
- Worked on multimodal data integration, improving efficiency in audio-related tasks.
- Explored low-resource ML techniques to address data scarcity in audio applications.
Seeking opportunities to leverage my expertise in:
- Speech recognition and natural language processing
- Sound event detection and audio classification
- Multimodal learning and LLMs
This is really cool work! Congrats on both the paper and the graduation! A long time ago, I worked on optimizing broadcast operations on GPUs [1]. Coming up with a strategy that promises high throughput across different array dimensionalities is quite challenging. I am looking forward to reading your work.
Thanks! Although I still have to actually graduate and the paper is in review, so maybe your congratulations are a bit premature! :)
> A long time ago, I worked on optimizing broadcast operations on GPUs [1].
Something similar happens in Futhark, actually. When something like `[1,2,3] + 4` is elaborated to `map (+) [1,2,3] (rep 4)`, the `rep` is eliminated by pushing the `4` into the `map`: `map (+4) [1,2,3]`. Futhark ultimately then compiles it to efficient CUDA/OpenCL/whatever.
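That rep-elimination rewrite can be sketched in Python, using `itertools.repeat` as a stand-in for Futhark's `rep` (a rough analogy, not Futhark's actual compilation pipeline):

```python
from operator import add
from itertools import repeat

xs = [1, 2, 3]

# Elaborated form: broadcast the scalar with repeat(4), then zip-map
# over both sequences, as in `map (+) [1,2,3] (rep 4)`.
elaborated = list(map(add, xs, repeat(4)))

# Fused form after rep-elimination: the scalar is pushed into the
# mapped function, as in `map (+4) [1,2,3]`, so no replicated array
# is ever materialized.
fused = list(map(lambda x: x + 4, xs))

print(elaborated, fused)
# [5, 6, 7] [5, 6, 7]
```

The payoff of the rewrite is that the replicated operand never exists as data, which is exactly what you want before generating GPU code.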
I am a fifth-year Ph.D. student in CS, interested in scalable machine learning algorithms and their applications in bioacoustics.
Location: New York, NY
Remote: Yes
Willing to relocate: Depends on the location.
Technologies: Deep learning (Python, PyTorch), GPU kernel development (C++, CUDA)
Email: me aaat enisberk.com
Looking for: Summer internship
Congrats on the launch; this is an important problem to solve. On the other hand, I found your previous idea about pipelines really interesting as well. Do you mind sharing why it did not work out?
It is really cool, indeed. Bravo for the creativity and the effort.
Facebook recently acquired a company called ctrl-labs that develops a neural interface. Their vision is to make computer interfaces more natural.
In the video, Bertolt says, "The thing is, for me, that is such a natural thing to do, I do not really have to think about it. I just do it, it is zero effort." (6:23) I love that; I hope we can have such computer interfaces in the near future.
I attended one of Sam Bowman's talks (1).
His talk was about "Task-Independent Language Understanding," and he also covered GLUE and SuperGLUE; he mentioned that some models now surpass the average person in these benchmarks. They also ran experiments to understand BERT's performance (2) (similar to the article "NLP's Clever Hans Moment"), but they arrived at a different answer to the question of what BERT really knows, so he was skeptical of sweeping conclusions. Check these out if you are interested.
As a smoke test, I tried the following queries, and they returned the same result. Good job!
Both return info from this link: https://www.state.gov/secretary-blinkens-call-with-foreign-m...