Yes, the ESP32 has Bluetooth support and there is an open source library to read the inputs from a PS4 controller. So I just wrote a small program to connect to the controller and send the right commands to the motors.
I think "traditional" RC people use a separate receiver module that links to their remote instead of doing this in software.
In my case it went from a patchy 1 Mbps to a stable 20 Mbps, according to my router's admin page, right after resoldering the resistor and connecting the antenna.
Resoldering the 0 ohm resistor was quite an adventure, but I somehow managed to connect the correct pins. The end result looks like the following:
Not my field, but from this[1] blog post, which references this[2] paper, it would seem so. Note that the optimal approaches are a bit different between training and inference. Also note that several of the approaches rely on batching multiple requests (prompts) in order to exploit the parallelism, so you won't see the same gains if fed only a single prompt at a time.
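Not from the linked posts, just a toy numpy illustration of the batching point: one batched matrix multiply versus the same work done one prompt at a time. The sizes are arbitrary stand-ins for a single weight matrix and per-request activations.

    import time
    import numpy as np

    d = 4096                                            # hidden size (arbitrary)
    batch = 32                                          # requests served together
    W = np.random.randn(d, d).astype(np.float32)        # stand-in for one weight matrix
    X = np.random.randn(batch, d).astype(np.float32)    # one activation row per request

    # One request at a time: the weight matrix is re-read for every prompt.
    t0 = time.perf_counter()
    _ = [x @ W for x in X]
    t_single = time.perf_counter() - t0

    # All requests batched: a single matmul amortizes the weight reads.
    t0 = time.perf_counter()
    _ = X @ W
    t_batched = time.perf_counter() - t0

    print(f"sequential: {t_single*1e3:.1f} ms, batched: {t_batched*1e3:.1f} ms")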
This video cleared up my confusion and corrected my misconceptions, giving me enough knowledge to hold a one-hour discussion with an actual Toyota mechanic.
It's just experience from a video monitoring project we built for a telco operator - we had to pay for quite a bit of JPEG-related tech just to get a certification, even though that tech itself is free.
Historically, the video field has been one of the most patent- and license-encumbered. That's why AV1 exists.
Almost no one knows if a project/business idea will be successful or not, so it's not much use asking. It's more productive to ask smart, experienced people how to best validate and execute an idea. People generally give useful and actionable feedback based on their experiences. Just make sure you understand who you're talking to when evaluating someone's advice.
"understand who you're talking to when evaluating someone's advice."
Good that you mentioned this - I've found this to be a crucial part as well: always weigh the advice you get against that person's background and interests (e.g. whether they're part of your target group, or an expert from a different domain).
I think people also suggest RAG because models develop so fast that the base model you fine-tune on will very probably be obsolete in a year or so.
If we are approaching diminishing returns, it makes more sense to fine-tune. Since the recent advances seem to come from throwing more compute at CoT and the like, maybe that time is close or has already come.
There are so many chain types that it's easier to use the abbreviations. Basically, you extend a RAG pipeline with a graph that controls how it either criticizes itself or performs different actions. It has gotten to the point where there are libraries for defining them. https://langchain-ai.github.io/langgraph/tutorials/introduct...
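A rough sketch of the pattern, assuming langgraph's StateGraph API; the retriever, LLM call, and grounding check below are just stubs you'd swap for real ones:

    from typing import List, TypedDict
    from langgraph.graph import StateGraph, END

    class RAGState(TypedDict):
        question: str
        docs: List[str]
        answer: str
        grounded: bool

    # Stubs standing in for a real retriever, LLM call, and self-check.
    def retrieve(state: RAGState) -> dict:
        return {"docs": [f"doc about {state['question']}"]}

    def generate(state: RAGState) -> dict:
        return {"answer": f"answer based on {len(state['docs'])} docs"}

    def critique(state: RAGState) -> dict:
        return {"grounded": True}  # a real check would grade the answer against the docs

    def route(state: RAGState) -> str:
        # Loop back to retrieval if the self-critique failed, otherwise finish.
        return END if state["grounded"] else "retrieve"

    builder = StateGraph(RAGState)
    builder.add_node("retrieve", retrieve)
    builder.add_node("generate", generate)
    builder.add_node("critique", critique)
    builder.set_entry_point("retrieve")
    builder.add_edge("retrieve", "generate")
    builder.add_edge("generate", "critique")
    builder.add_conditional_edges("critique", route)

    graph = builder.compile()
    print(graph.invoke({"question": "why do people suggest RAG?"}))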
Fine-tuning to a specific codebase is a bit strange. It's going to learn some style/tool guidance, which is good (but there are other ways of getting that), at the risk of unlearning some of the generalization it gained from seeing 1,000,000x more code samples of varied styles.
In general I'd suggest trying this first:
- Large context: use large-context models to load the relevant files. It can pick up your style/tool choices fine this way without fine-tuning. I'm usually manually inserting files into context (sketched below this list), but a great RAG solution would be ideal.
- Project-specific instructions (like .cursorrules): tell it the specific things you want. I tell it my preferred test tools/strategies/styles.
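For the "manually inserting files into context" part, something like this is all it takes - the file paths and the prompt wrapper here are made up:

    from pathlib import Path

    # Hand-picked files whose style/conventions the model should follow.
    relevant = ["src/api/handlers.py", "src/api/models.py", "tests/test_handlers.py"]

    context = "\n\n".join(
        f"--- {p} ---\n{Path(p).read_text()}" for p in relevant
    )

    prompt = (
        "You are working in the codebase below. Match its style, test tools, "
        "and conventions.\n\n" + context + "\n\nTask: add an endpoint that ..."
    )
    # `prompt` then goes to whatever large-context model/API you use.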
I am curious to see more detailed evals here, but the claims are too high level to really dive into.
In general: I love fine-tuning for more specific/repeatable tasks. I even have my own fine-tuning platform (https://github.com/Kiln-AI/Kiln). However, coding is very broad - a good use case for large foundation models with smart use of context.
I've never looked into RC builds before - how are they controlled? Do you connect the PS4 controller directly to the ESP?