There actually is some work (https://arxiv.org/abs/2003.13630) claiming that FLOPs are a poor measure of real-world performance - with some of the more recent FLOP-efficient models actually running slower than older models.
Forgive me if I’m being dense, but shouldn’t we expect performance to degrade if FLOP count per unit time is decreased, assuming performance is defined as overall runtime (FPS in this case)? It’s a trade-off scenario where runtime performance is balanced against other concerns such as power consumption.
This isn't directly relevant to PP-YOLO, but I'm surprised roboflow is still promoting "YOLOv5" - despite that model not having an associated paper and not being made by the authors of the previous YOLOs.[1]
The ML community has been asking the authors of that model to rename their project[2] because they are basically stealing publicity by making it seem like the next version of YOLO, despite its performance being worse than that of YOLOv4.[3]
Roboflow has deflected this in the past by claiming they don't know if "YOLOv5" is the correct name[4], but by continuing to promote it, they are directly supporting it. In fact, I wouldn't be surprised if their claim of not being affiliated with Ultralytics were either false or a half-truth, given that all the top pages about "YOLOv5" were written by roboflow, including the first official announcement.[5]
This comment is a bit misleading. He's OK with using YOLO, but agrees that using version numbers is misleading. YOLO-v5 is not a successor to YOLO-v4, it's just another version from someone else.
> Just my opinion but I’m happy for anyone to keep using the YOLO name! Just try to avoid version number collisions....
"Avoid version number collisions" means "don't use the same version number". There is nothing in that or any other tweet to indicate he doesn't think that v5 is appropriate, and if you claim otherwise you should provide a citation.
I don't think the original authors mind others using the word "YOLO". It's just ass-holish to call it YOLOv5 if you're not the original author, if only because the original author is probably already working on something they plan to release eventually as "YOLOv5".
If they had called it FOO-YOLO, YOLO-BLAH, YOLO++, or literally anything else, it would probably be perfectly fine.
The original YOLO author publicly announced that he was no longer going to be working on computer vision models and has chimed in and said he has no problem with the name: https://twitter.com/pjreddie/status/1272618558254534657
We’re also the top result when you google eg “How to train yolov4” and several of the top terms for training efficientdet. Hopefully we will be a great source of info on all computer vision models someday. Our mission is to make these things easier for people to use and understand.
Regardless of what you think about its name, YOLOv5 is a great model for a lot of use cases. And hundreds of our customers are using it in production and are very satisfied with its performance. Just as many are using YOLOv4. And EfficientDet. And MobileNet SSD v2.
They’re tools, not sports teams. It’s kind of weird that they’ve developed fanbases.
Why are you attacking a "fanbase" mentality when there is none? YOLO stood for a series of networks and subsequent improvements by PJ Redmon. Derivative work like PP-YOLO still signals that it's derivative work, but names like "YOLOv5" signal that it's an updated/improved version, which it is not.
This weird defence pretty much confirms that Ultralytics and Roboflow are related though.
Just chiming in: I had similar concerns about Roboflow initially, but to my surprise @josephofiowa from Roboflow reached out to me to discuss it. They set aside time to specifically address a lot of the concerns I raised – e.g. that they seemed to be hyping up a model without doing appropriate benchmarks (they later did a thorough benchmark: https://blog.roboflow.ai/yolov4-versus-yolov5/).
They didn't need to do this. Part of my conversation was "I get it, you're a startup, you have to focus on business value rather than research concerns." But they made the time, and put in the effort, and I feel compelled to at least mention that that happened.
Anyway, as a fellow researcher, I just wanted to put in a good word for Roboflow. Their priorities seem to be in order. I've also learned some interesting things from their yolo breakdowns, e.g. that training time on the newer models is significantly lower.
The many people taking issue with “v5” because it’s not by the same author as “v4” but not with “v4” even though it’s not the same author as “v3” are the “fanbases” I was referring to.
FWIW, the YOLOv4 author noted he's not opposed to Ultralytics's project (https://i.imgur.com/G00DyrX.png) as long as model comparisons are fair.
I don’t think I’m going to convince you that we don’t have some kind of hidden agenda, but we’ll continue to provide support and information about all of the new models.
YOLOv4's authors were connected to previous ones to some extent at least, unlike YOLOv5's 'authors'. I don't particularly care either way, but attacking people put off by intentionally confusing naming is probably not the best move if you're trying to establish credibility.
>They’re tools, not sports teams. It’s kind of weird that they’ve developed fanbases.
Heads up: insulting critics by basically calling them weird obsessed fans is not a good PR strategy. Just saying. Personally I try to avoid companies that do that, since I don't know when I may end up on the receiving end for some perceived slight.
edit: Also, odd to name it YOLOv5 presumably due to the strong brand appeal of that name, and then to go and insult people for that brand appeal.
An appropriate quote: "If you can't intelligently argue for both sides of an issue, you don't understand the issue well enough to argue for either."
There are many people for whom the declarative paradigm is a huge plus. I would say there are at least 2 major approaches in running fast neural networks: 1. Figure out the common big components and make fast versions of those. 2. Figure out the common small components and how to make those run fast together.
Different libraries have different strengths and weaknesses that match the abstraction level that they work at. For example, Caffe is the canonical example of approach 1, which makes writing new kinds of layers much harder than with other libraries, but makes connecting those layers quite easy as well as enabling new techniques that work layer-wise (such as new kinds of initialization). Approach 2 (TensorFlow's approach) introduces a lot of complexity, but it allows for different kinds of research. For example, because how you combine the low-level operations is decoupled from how those things are optimized together, you can more easily create efficient versions of new layers without resorting to native code.
After being exposed to several declarative tools during my career, I must say they age poorly: make, autoconf, Tensorflow, and so on. They may start out being elegant, but every successful library is eventually (ab)used for something the original authors didn't envision, and with declarative syntax it descends into madness of "So if I change A to B here does it apply before or after C becomes D?"
At least Tensorflow isn't at that level, because its "declarative" syntax is just yet another imperative language living on top of Python. But it still makes performance debugging really hard.
With PyTorch, I can just sprinkle torch.cuda.synchronize() liberally and the code will tell me exactly how many milliseconds each CUDA kernel call is consuming. With Tensorflow, I have no idea why it is slow, or whether it can be any faster at all.
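The synchronize-then-time pattern described above can be sketched roughly like this. This is a generic helper, not part of PyTorch's API; the name `time_section` is made up, and with PyTorch you would pass `torch.cuda.synchronize` as the barrier:

```python
import time

def time_section(fn, sync=None, repeats=10):
    """Rough average wall time per call of fn(), in milliseconds.

    With an asynchronous backend like PyTorch CUDA, pass a barrier such
    as torch.cuda.synchronize as `sync` so that queued kernels are
    flushed before the clock starts and before it stops; without the
    barrier you would only be timing kernel *launches*, not execution.
    """
    if sync is not None:
        sync()  # drain any previously queued work
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    if sync is not None:
        sync()  # wait for the timed work to actually finish
    return (time.perf_counter() - start) / repeats * 1000.0
```

For example, `time_section(lambda: model(x), sync=torch.cuda.synchronize)` would give a meaningful per-forward-pass number, whereas the same call without the barrier would not.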
I believe that make's declarative nature is not the cause of its problems at all - its poor syntax and lack of support for programming abstractions are what make it clunky to use.
Something like rake, which operates on the same fundamental principles (i.e. declarative dependency description) but uses Ruby syntax, has aged better.
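The declarative-dependency principle behind make and rake can be illustrated with a toy sketch (in Python here, to match the rest of the thread; the names `task`, `run`, and `tasks` are all made up). You declare *what* depends on what; the engine works out the execution order:

```python
# Toy declarative build runner: targets declare their dependencies,
# and the engine derives the order. No cycle detection - it's a sketch.
tasks = {}  # name -> (dependencies, action)

def task(name, deps=()):
    """Decorator registering `action` as the recipe for `name`."""
    def register(action):
        tasks[name] = (list(deps), action)
        return action
    return register

def run(name, done=None):
    """Run `name`'s dependencies (recursively, once each), then `name`."""
    done = set() if done is None else done
    if name in done:
        return done
    deps, action = tasks[name]
    for dep in deps:
        run(dep, done)
    action()
    done.add(name)
    return done

order = []

@task("compile")
def compile_step():
    order.append("compile")

@task("link", deps=("compile",))
def link_step():
    order.append("link")

run("link")
# order is now ["compile", "link"]: only "link" was requested,
# but the engine ran "compile" first because of the declaration.
```

The rake observation holds here too: because the declarations live in a real programming language, you keep normal abstractions (loops, functions, variables) instead of make's macro-and-tab syntax.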
Indeed. Getting these text-based configuration tools to work requires a lot of experience in language design.
Lots of tools become accidentally Turing complete, like Make. You need to plan these things from the start. If you want any computation possible at all, you need to be extremely vigilant, and base your language on firm foundations. See eg Dhall, a non-Turing complete configuration language (http://www.haskellforall.com/2016/12/dhall-non-turing-comple...).
If you are happy to get Turing completeness, you might want to write your tool as an embedded DSL and piggyback on an existing language, declarative or otherwise.
I took the article to be the counterpoint to the uninhibited praise of TF. In that light, I don't think it was meant as a balanced assessment of the whole product, but had a narrow scope of simply pointing out a handful of flaws that he thinks isn't discussed enough.
It's the same feeling when you hate a movie that everyone gives five stars: you might agree with some aspects of the praise (or even most of it), but that's not what you're going to be talking about. You'll talk about how and why it sucks compared to better movies.
I'd guess he could make a strong pro-TF argument if desired, but that just wasn't the point of this post.
The assumption that there are always two intelligent sides to an issue is a pretty big assumption. The claim is that if you understand both sides of an issue really deeply, and you choose side B over side A, you should be able to argue intelligently for side A - otherwise your choice of side B was not made intelligently. But this falls down on further examination.
If you believe that side B is correct and side A is incorrect, given your deep understanding of the issue, then any argument you make for side A is in some way not intelligent: you must keep your most potent arguments for side B out of your argument for side A, denying their existence in your head, and thus argue from a less intelligent position than you normally would.
The ability to argue both sides is only really possible when all sides are considered trivial in their differences.
on edit: never mind, I see you mean steelmanning. However, that does not really have anything to do with what I said. You should be able to give someone the best defence imaginable, but what if the best defence imaginable is shit compared to the other side? Then you cannot argue both sides equally; this does not mean you do not understand either side. It means one side is actually wrong, and the other is correct.
Necro-cannibalism did allow some to survive severe famine.
Seriously though, the quote works well for many usual cases of discussion. People have incentives which they do believe in - you can discuss how those beliefs started and why you think they don't apply. Sure, you can find pathological edge cases where doing that didn't make sense. Doesn't mean the rule is bad for almost everything else.
Caveat emptor: Scaruffi doesn't hesitate to let his imagination far surpass his apparent handle on the facts, but hey they are just opinions, so let's not ruin the fun too easily.
"I reached the conclusion that the meat diet started with cannibalism: we ate humans before we ate animals. And cannibalism started with women eating their own children, and children eating their grandparents.
Cats do it all the time: when they cannot afford too many kittens, the mom eats some of them. If she didn't, it would reduce the chances that the others survive. Since they are YOUR children, you don't feel too bad eating them: they are flesh of your flesh, aren't they?
You have to think like a woman of two million years ago, who was pregnant all the time, did not know how to prevent pregnancy nor how to perform abortion, and had to feed many children, often with no male partner to help out.
The tradition of eating old people is documented in many cultures: nothing was wasted, and it would have been silly to waste the flesh of a dead person.
Once you start eating your own family members, it becomes natural to start eating other humans. In prehistory (and even not too long ago) humans from other tribes were not perceived as belonging to the same species: they don't speak my language, their customs are different, they smell different, therefore they are not the same species as me. Killing them and eating them was not any more morally wrong than eating a hamburger today.
The main meal was brains: our brains need a lot of proteins, and another brain gives the highest amount of proteins in the shortest time. Hence the tradition of killing an enemy and eating his brain.
Then cost/benefit analysis made humans realize that humans were difficult and dangerous to kill, whereas many animals were easy to kill and provided almost the same proteins. That's when domestication started: feeding on defenseless cows and goats was a lot safer than feeding on humans armed with spears and axes. That's when cannibalism became a thing of the past, of primitive men. However, it was still practiced (and valued) until recently in many places of the world.
We are living in the first age free of cannibalism in the history of the human species. It's an age just a few decades old. The age of cannibalism lasted several million years. Put things in perspective and, as repugnant as it sounds today, cannibalism can be said to be the norm, and non-cannibalism the abnormality. If some day the whole world becomes vegetarian, you the meat eater will look like a savage but today you think of vegetarians as lunatic people.
There are 1500 animal species that are cannibals, and 75 of them are mammals, and one of them is the chimp, our closest relative. And now you may continue to eat your hamburger."
Interesting take. Makes you think about the apparently rather durable popularity of zombie horror, which even outside its occasional fads retains a strong base of adherents.
It didn't "utterly destroy" the quotation. It's pretty easy to argue the virtues of cannibalism, it's just that for modern day society they do not trump the downsides.
For example, cannibalism has successfully prevented death by starvation (e.g. the Donner Party), and it reduces the need for disposing of the dead. It's an application of reduce-reuse-recycle. Some species of other animals practice it as part of their normal lives (praying mantis as part of mating, lions as a social mechanic).
The very obvious counterpoints are that it's completely counter to modern cultural morality and it's been shown to increase likelihood of transferring brain parasites in humans.
The fact that you didn't even approach the other side and immediately concluded that he "destroyed the quotation" kinda shows that you aren't making the slightest effort to understand opposing viewpoints.
Small correction: not brain parasites, but specifically prions (misfolded proteins that, fascinatingly, self-replicate). Kuru[1] (closely related to Creutzfeldt–Jakob disease and Alzheimer's) was extensively studied last century in the Fore people of Papua New Guinea (who were practicing cannibalism as a funerary rite), culminating in the awarding of the 1997 Nobel Prize[2] for the discovery of the prion, a completely new kind of pathogen.
It's a fascinating story! NPR has a great podcast on it.
And for what it's worth, I completely agree. I was given two key pieces of advice by two brilliant professors in my undergraduate philosophy career:
- Always be as charitable as possible with your opponents' arguments: always assume they are taking the best possible position.
- And this goes hand in hand with the above: always be ready for the best possible counter to your own argument, making sure you have an appropriate response. Few arguments are air-tight, especially for controversial issues.
Prions are highly resistant to disinfectants, heat, ultraviolet radiation, ionizing radiation and formalin. ... Prions can be destroyed through incineration providing the incinerator can maintain a temperature of 900 F for four hours.
Not to mention that the quotation implies that being able to argue both sides of a contentious issue is important to making a strong case for the side you agree with. That's totally irrelevant for generally settled questions like cannibalism, so the "utter destruction" was anything but.
State-of-the-art results no longer require unsupervised pretraining with autoencoders or RBMs. But back when unsupervised pretraining was more popular, top researchers rationalized that it was more biologically plausible than standard nets trained with backprop: brains generalize by observing a large amount of data over a lifetime in order to quickly recognize new objects, and since those nets weren't trained for a specific task, the hope was that they would generalize better and be a step closer to general intelligence.
You may be interested in DBpedia, which gets its data from wikipedia but presents it in a structured format (for example, the page for Clojure: http://dbpedia.org/describe/?url=http://dbpedia.org/resource...), though for this case, it seems the data is already on wikipedia.
The problem with Wikipedia as the data source is that there's some goof who goes around deleting articles on programming languages which aren't prolific enough (e.g. Kernel by John Shutt). Some pages don't exist to begin with (or can't exist, because the only source of information is on the language author's own page, and doesn't meet the criteria needed to be listed).
Shame, because some of the languages not listed tend to be the most interesting ones.
I wouldn't go so far as to say that Reagent's model is perfect. Definitely pretty good, but we've had some issues that were somewhat non-obvious.
If you're worried about it being maintained, there was actually a great discussion of this post on the clojure subreddit (https://www.reddit.com/r/Clojure/comments/2jq0cu/om_no_troub...) where others talk about Reagent having a community-maintained fork with new features.
Out of curiosity though, what kind of helper methods would you be looking for? I found it pretty comprehensive on its own (though I do have some utility methods for some niche uses involving hierarchical atoms).
If any of my post sounded as if I was complaining about the lack of syntactic sugar, I apologize as that was not at all my intention. The best practices and syntactic ugliness didn't matter to us at all after the learning curve. We wrote some macros to solve the problem, and didn't have to think about it anymore. We used sablono as well, so the syntax was almost the same as that of Reagent.
The biggest problems that we faced with Om were how normal Clojure-y things just didn't work with it, and how the app state had to be kept in a tree instead of a DAG (mostly the latter), which created issues with consistency, performance, or both.
> trying to put mutating state into the application state object
The way you're supposed to use Om is to put all the mutable state into a global atom. I didn't mean putting reference types in the global atom or anything horrible like that, but the problems with cursors being pointers still exist.
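For readers unfamiliar with the cursor idea: here's a loose sketch (in Python rather than ClojureScript, with made-up names - Om's actual cursors are richer than this) of state-in-one-tree plus cursors, and why the tree shape bites when the data is really a DAG:

```python
# The whole app state lives in one tree; a "cursor" is essentially
# a path into it, handed to a component so it sees only its subtree.
app_state = {
    "users": {"alice": {"score": 1}},
    "leaderboard": ["alice"],  # refers to alice by key, not by value
}

def cursor_get(state, path):
    """Follow `path` down into the state tree and return that subtree."""
    node = state
    for key in path:
        node = node[key]
    return node

# A component over alice's record would be given this cursor:
alice = cursor_get(app_state, ("users", "alice"))

# The catch the parent comments describe: if the leaderboard component
# also needs alice's score, a strict tree can't share the subtree - you
# either duplicate the data under "leaderboard" (consistency problems)
# or store keys and re-look them up on every render (performance).
```

This is the tree-vs-DAG tension mentioned upthread, not a description of Om's actual implementation.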
Yup! We were about to implement something like that, but we realized Reagent's model of having atoms store the state worked pretty well with javelin's FRP, and once you break down the state into atoms, the cursors aren't needed anymore.